Test Report: KVM_Linux_crio 17719

e08a2828f2be3e524baaf41342316dad88935561:2023-12-07:32188

Tests failed (28/299)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 161.69
48 TestAddons/StoppedEnableDisable 155
164 TestIngressAddonLegacy/serial/ValidateIngressAddons 175.96
212 TestMultiNode/serial/PingHostFrom2Pods 3.54
219 TestMultiNode/serial/RestartKeepsNodes 688.93
221 TestMultiNode/serial/StopMultiNode 143.18
228 TestPreload 253.78
234 TestRunningBinaryUpgrade 133.21
243 TestStoppedBinaryUpgrade/Upgrade 324.12
274 TestPause/serial/SecondStartNoReconfiguration 106.78
280 TestStartStop/group/old-k8s-version/serial/Stop 140.24
285 TestStartStop/group/embed-certs/serial/Stop 139.95
289 TestStartStop/group/no-preload/serial/Stop 139.52
291 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.88
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
300 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.15
301 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.17
302 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.12
303 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.24
304 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 421.67
305 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 484.59
306 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 330.72
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 126.94
313 TestStartStop/group/newest-cni/serial/Stop 139.64
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 12.45
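
To dig into any of the failures above, a single test can be re-run locally against the minikube integration suite. The sketch below is not taken from this report: the package path, the integration build tag, and the timeout are assumptions about the standard minikube test layout, and a kvm2/crio environment comparable to this job is presumed to be available.

	# Sketch only: re-run one failed test from the table (path, tag and timeout are assumptions)
	go test -tags=integration ./test/integration \
		-run "TestAddons/parallel/Ingress" -timeout 30m -v

Subtests map directly onto -run patterns, so any slash-separated name from the table (for example TestStartStop/group/no-preload/serial/Stop) can be selected the same way.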
TestAddons/parallel/Ingress (161.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-757601 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-757601 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-757601 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [57e0b0f5-d87c-4d24-bab1-4a5e62295828] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [57e0b0f5-d87c-4d24-bab1-4a5e62295828] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.023468642s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-757601 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.815472296s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-757601 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.93
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-757601 addons disable ingress-dns --alsologtostderr -v=1: (1.034297371s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-757601 addons disable ingress --alsologtostderr -v=1: (7.751203023s)
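
The step that fails above is the in-VM curl against the ingress controller: curl exits with status 28, which typically corresponds to its operation-timed-out error and is what ssh surfaces as "Process exited with status 28". A hedged sketch of how one might reproduce the check by hand against a still-running profile; the kubectl inspection commands and the --max-time value are assumptions, not part of the test:

	# Sketch only: manual reproduction of the failed ingress check (assumed flags)
	kubectl --context addons-757601 -n ingress-nginx get pods -o wide
	kubectl --context addons-757601 get ingress
	out/minikube-linux-amd64 -p addons-757601 ssh \
		"curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"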
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-757601 -n addons-757601
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-757601 logs -n 25: (1.40362453s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-619271 | jenkins | v1.32.0 | 07 Dec 23 20:02 UTC |                     |
	|         | -p download-only-619271                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 07 Dec 23 20:03 UTC | 07 Dec 23 20:03 UTC |
	| delete  | -p download-only-619271                                                                     | download-only-619271 | jenkins | v1.32.0 | 07 Dec 23 20:03 UTC | 07 Dec 23 20:03 UTC |
	| delete  | -p download-only-619271                                                                     | download-only-619271 | jenkins | v1.32.0 | 07 Dec 23 20:03 UTC | 07 Dec 23 20:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-542325 | jenkins | v1.32.0 | 07 Dec 23 20:03 UTC |                     |
	|         | binary-mirror-542325                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39835                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-542325                                                                     | binary-mirror-542325 | jenkins | v1.32.0 | 07 Dec 23 20:03 UTC | 07 Dec 23 20:03 UTC |
	| addons  | enable dashboard -p                                                                         | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:03 UTC |                     |
	|         | addons-757601                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:03 UTC |                     |
	|         | addons-757601                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-757601 --wait=true                                                                | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:03 UTC | 07 Dec 23 20:06 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-757601 addons                                                                        | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:06 UTC | 07 Dec 23 20:06 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-757601 addons disable                                                                | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:06 UTC | 07 Dec 23 20:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC | 07 Dec 23 20:07 UTC |
	|         | -p addons-757601                                                                            |                      |         |         |                     |                     |
	| ip      | addons-757601 ip                                                                            | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC | 07 Dec 23 20:07 UTC |
	| addons  | addons-757601 addons disable                                                                | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC | 07 Dec 23 20:07 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-757601 ssh cat                                                                       | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC | 07 Dec 23 20:07 UTC |
	|         | /opt/local-path-provisioner/pvc-109e20d2-16b7-43c6-9128-df817164d27d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-757601 addons disable                                                                | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC | 07 Dec 23 20:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC | 07 Dec 23 20:07 UTC |
	|         | addons-757601                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-757601 ssh curl -s                                                                   | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC | 07 Dec 23 20:07 UTC |
	|         | addons-757601                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC | 07 Dec 23 20:07 UTC |
	|         | -p addons-757601                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-757601 addons                                                                        | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC | 07 Dec 23 20:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-757601 addons                                                                        | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:07 UTC | 07 Dec 23 20:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-757601 ip                                                                            | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:09 UTC | 07 Dec 23 20:09 UTC |
	| addons  | addons-757601 addons disable                                                                | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:09 UTC | 07 Dec 23 20:09 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-757601 addons disable                                                                | addons-757601        | jenkins | v1.32.0 | 07 Dec 23 20:09 UTC | 07 Dec 23 20:09 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:03:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:03:09.294814   17445 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:03:09.294946   17445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:03:09.294955   17445 out.go:309] Setting ErrFile to fd 2...
	I1207 20:03:09.294959   17445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:03:09.295108   17445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 20:03:09.295738   17445 out.go:303] Setting JSON to false
	I1207 20:03:09.296534   17445 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2735,"bootTime":1701976654,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 20:03:09.296587   17445 start.go:138] virtualization: kvm guest
	I1207 20:03:09.298702   17445 out.go:177] * [addons-757601] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 20:03:09.300180   17445 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:03:09.300186   17445 notify.go:220] Checking for updates...
	I1207 20:03:09.301626   17445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:03:09.303431   17445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:03:09.304856   17445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:03:09.306575   17445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 20:03:09.308149   17445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:03:09.309603   17445 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:03:09.341930   17445 out.go:177] * Using the kvm2 driver based on user configuration
	I1207 20:03:09.343309   17445 start.go:298] selected driver: kvm2
	I1207 20:03:09.343322   17445 start.go:902] validating driver "kvm2" against <nil>
	I1207 20:03:09.343336   17445 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:03:09.344004   17445 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:03:09.344089   17445 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 20:03:09.359275   17445 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 20:03:09.359358   17445 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 20:03:09.359564   17445 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 20:03:09.359608   17445 cni.go:84] Creating CNI manager for ""
	I1207 20:03:09.359620   17445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:03:09.359631   17445 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 20:03:09.359640   17445 start_flags.go:323] config:
	{Name:addons-757601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-757601 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:03:09.359770   17445 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:03:09.361752   17445 out.go:177] * Starting control plane node addons-757601 in cluster addons-757601
	I1207 20:03:09.363107   17445 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:03:09.363150   17445 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 20:03:09.363160   17445 cache.go:56] Caching tarball of preloaded images
	I1207 20:03:09.363283   17445 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 20:03:09.363297   17445 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 20:03:09.363630   17445 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/config.json ...
	I1207 20:03:09.363651   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/config.json: {Name:mk79bff07b071e786d69657b58b25fdefce8300b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:09.363803   17445 start.go:365] acquiring machines lock for addons-757601: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 20:03:09.363861   17445 start.go:369] acquired machines lock for "addons-757601" in 40.203µs
	I1207 20:03:09.363884   17445 start.go:93] Provisioning new machine with config: &{Name:addons-757601 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-757601 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 20:03:09.363970   17445 start.go:125] createHost starting for "" (driver="kvm2")
	I1207 20:03:09.365847   17445 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1207 20:03:09.366024   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:03:09.366061   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:03:09.380178   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41737
	I1207 20:03:09.380606   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:03:09.381112   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:03:09.381133   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:03:09.381468   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:03:09.381637   17445 main.go:141] libmachine: (addons-757601) Calling .GetMachineName
	I1207 20:03:09.381805   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:03:09.381987   17445 start.go:159] libmachine.API.Create for "addons-757601" (driver="kvm2")
	I1207 20:03:09.382009   17445 client.go:168] LocalClient.Create starting
	I1207 20:03:09.382042   17445 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 20:03:09.468370   17445 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 20:03:09.521977   17445 main.go:141] libmachine: Running pre-create checks...
	I1207 20:03:09.521997   17445 main.go:141] libmachine: (addons-757601) Calling .PreCreateCheck
	I1207 20:03:09.522529   17445 main.go:141] libmachine: (addons-757601) Calling .GetConfigRaw
	I1207 20:03:09.522948   17445 main.go:141] libmachine: Creating machine...
	I1207 20:03:09.522962   17445 main.go:141] libmachine: (addons-757601) Calling .Create
	I1207 20:03:09.523150   17445 main.go:141] libmachine: (addons-757601) Creating KVM machine...
	I1207 20:03:09.524417   17445 main.go:141] libmachine: (addons-757601) DBG | found existing default KVM network
	I1207 20:03:09.525178   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:09.524980   17477 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015350}
	I1207 20:03:09.530755   17445 main.go:141] libmachine: (addons-757601) DBG | trying to create private KVM network mk-addons-757601 192.168.39.0/24...
	I1207 20:03:09.598472   17445 main.go:141] libmachine: (addons-757601) DBG | private KVM network mk-addons-757601 192.168.39.0/24 created
	I1207 20:03:09.598500   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:09.598462   17477 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:03:09.598527   17445 main.go:141] libmachine: (addons-757601) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601 ...
	I1207 20:03:09.598543   17445 main.go:141] libmachine: (addons-757601) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 20:03:09.598654   17445 main.go:141] libmachine: (addons-757601) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 20:03:09.835136   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:09.834993   17477 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa...
	I1207 20:03:10.056134   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:10.055960   17477 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/addons-757601.rawdisk...
	I1207 20:03:10.056172   17445 main.go:141] libmachine: (addons-757601) DBG | Writing magic tar header
	I1207 20:03:10.056189   17445 main.go:141] libmachine: (addons-757601) DBG | Writing SSH key tar header
	I1207 20:03:10.056204   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:10.056085   17477 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601 ...
	I1207 20:03:10.056219   17445 main.go:141] libmachine: (addons-757601) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601
	I1207 20:03:10.056259   17445 main.go:141] libmachine: (addons-757601) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 20:03:10.056283   17445 main.go:141] libmachine: (addons-757601) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601 (perms=drwx------)
	I1207 20:03:10.056298   17445 main.go:141] libmachine: (addons-757601) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:03:10.056334   17445 main.go:141] libmachine: (addons-757601) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 20:03:10.056363   17445 main.go:141] libmachine: (addons-757601) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 20:03:10.056388   17445 main.go:141] libmachine: (addons-757601) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 20:03:10.056406   17445 main.go:141] libmachine: (addons-757601) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 20:03:10.056420   17445 main.go:141] libmachine: (addons-757601) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 20:03:10.056434   17445 main.go:141] libmachine: (addons-757601) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 20:03:10.056464   17445 main.go:141] libmachine: (addons-757601) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 20:03:10.056472   17445 main.go:141] libmachine: (addons-757601) DBG | Checking permissions on dir: /home/jenkins
	I1207 20:03:10.056487   17445 main.go:141] libmachine: (addons-757601) DBG | Checking permissions on dir: /home
	I1207 20:03:10.056499   17445 main.go:141] libmachine: (addons-757601) DBG | Skipping /home - not owner
	I1207 20:03:10.056545   17445 main.go:141] libmachine: (addons-757601) Creating domain...
	I1207 20:03:10.057349   17445 main.go:141] libmachine: (addons-757601) define libvirt domain using xml: 
	I1207 20:03:10.057364   17445 main.go:141] libmachine: (addons-757601) <domain type='kvm'>
	I1207 20:03:10.057372   17445 main.go:141] libmachine: (addons-757601)   <name>addons-757601</name>
	I1207 20:03:10.057381   17445 main.go:141] libmachine: (addons-757601)   <memory unit='MiB'>4000</memory>
	I1207 20:03:10.057387   17445 main.go:141] libmachine: (addons-757601)   <vcpu>2</vcpu>
	I1207 20:03:10.057396   17445 main.go:141] libmachine: (addons-757601)   <features>
	I1207 20:03:10.057404   17445 main.go:141] libmachine: (addons-757601)     <acpi/>
	I1207 20:03:10.057416   17445 main.go:141] libmachine: (addons-757601)     <apic/>
	I1207 20:03:10.057438   17445 main.go:141] libmachine: (addons-757601)     <pae/>
	I1207 20:03:10.057446   17445 main.go:141] libmachine: (addons-757601)     
	I1207 20:03:10.057456   17445 main.go:141] libmachine: (addons-757601)   </features>
	I1207 20:03:10.057461   17445 main.go:141] libmachine: (addons-757601)   <cpu mode='host-passthrough'>
	I1207 20:03:10.057469   17445 main.go:141] libmachine: (addons-757601)   
	I1207 20:03:10.057474   17445 main.go:141] libmachine: (addons-757601)   </cpu>
	I1207 20:03:10.057494   17445 main.go:141] libmachine: (addons-757601)   <os>
	I1207 20:03:10.057515   17445 main.go:141] libmachine: (addons-757601)     <type>hvm</type>
	I1207 20:03:10.057534   17445 main.go:141] libmachine: (addons-757601)     <boot dev='cdrom'/>
	I1207 20:03:10.057547   17445 main.go:141] libmachine: (addons-757601)     <boot dev='hd'/>
	I1207 20:03:10.057561   17445 main.go:141] libmachine: (addons-757601)     <bootmenu enable='no'/>
	I1207 20:03:10.057574   17445 main.go:141] libmachine: (addons-757601)   </os>
	I1207 20:03:10.057585   17445 main.go:141] libmachine: (addons-757601)   <devices>
	I1207 20:03:10.057593   17445 main.go:141] libmachine: (addons-757601)     <disk type='file' device='cdrom'>
	I1207 20:03:10.057605   17445 main.go:141] libmachine: (addons-757601)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/boot2docker.iso'/>
	I1207 20:03:10.057618   17445 main.go:141] libmachine: (addons-757601)       <target dev='hdc' bus='scsi'/>
	I1207 20:03:10.057632   17445 main.go:141] libmachine: (addons-757601)       <readonly/>
	I1207 20:03:10.057650   17445 main.go:141] libmachine: (addons-757601)     </disk>
	I1207 20:03:10.057664   17445 main.go:141] libmachine: (addons-757601)     <disk type='file' device='disk'>
	I1207 20:03:10.057678   17445 main.go:141] libmachine: (addons-757601)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 20:03:10.057697   17445 main.go:141] libmachine: (addons-757601)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/addons-757601.rawdisk'/>
	I1207 20:03:10.057706   17445 main.go:141] libmachine: (addons-757601)       <target dev='hda' bus='virtio'/>
	I1207 20:03:10.057725   17445 main.go:141] libmachine: (addons-757601)     </disk>
	I1207 20:03:10.057740   17445 main.go:141] libmachine: (addons-757601)     <interface type='network'>
	I1207 20:03:10.057756   17445 main.go:141] libmachine: (addons-757601)       <source network='mk-addons-757601'/>
	I1207 20:03:10.057764   17445 main.go:141] libmachine: (addons-757601)       <model type='virtio'/>
	I1207 20:03:10.057770   17445 main.go:141] libmachine: (addons-757601)     </interface>
	I1207 20:03:10.057778   17445 main.go:141] libmachine: (addons-757601)     <interface type='network'>
	I1207 20:03:10.057787   17445 main.go:141] libmachine: (addons-757601)       <source network='default'/>
	I1207 20:03:10.057795   17445 main.go:141] libmachine: (addons-757601)       <model type='virtio'/>
	I1207 20:03:10.057801   17445 main.go:141] libmachine: (addons-757601)     </interface>
	I1207 20:03:10.057809   17445 main.go:141] libmachine: (addons-757601)     <serial type='pty'>
	I1207 20:03:10.057816   17445 main.go:141] libmachine: (addons-757601)       <target port='0'/>
	I1207 20:03:10.057824   17445 main.go:141] libmachine: (addons-757601)     </serial>
	I1207 20:03:10.057830   17445 main.go:141] libmachine: (addons-757601)     <console type='pty'>
	I1207 20:03:10.057837   17445 main.go:141] libmachine: (addons-757601)       <target type='serial' port='0'/>
	I1207 20:03:10.057853   17445 main.go:141] libmachine: (addons-757601)     </console>
	I1207 20:03:10.057868   17445 main.go:141] libmachine: (addons-757601)     <rng model='virtio'>
	I1207 20:03:10.057895   17445 main.go:141] libmachine: (addons-757601)       <backend model='random'>/dev/random</backend>
	I1207 20:03:10.057907   17445 main.go:141] libmachine: (addons-757601)     </rng>
	I1207 20:03:10.057936   17445 main.go:141] libmachine: (addons-757601)     
	I1207 20:03:10.057952   17445 main.go:141] libmachine: (addons-757601)     
	I1207 20:03:10.057962   17445 main.go:141] libmachine: (addons-757601)   </devices>
	I1207 20:03:10.057970   17445 main.go:141] libmachine: (addons-757601) </domain>
	I1207 20:03:10.057981   17445 main.go:141] libmachine: (addons-757601) 
	I1207 20:03:10.063631   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:d2:99:e9 in network default
	I1207 20:03:10.064105   17445 main.go:141] libmachine: (addons-757601) Ensuring networks are active...
	I1207 20:03:10.064127   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:10.064675   17445 main.go:141] libmachine: (addons-757601) Ensuring network default is active
	I1207 20:03:10.065033   17445 main.go:141] libmachine: (addons-757601) Ensuring network mk-addons-757601 is active
	I1207 20:03:10.065565   17445 main.go:141] libmachine: (addons-757601) Getting domain xml...
	I1207 20:03:10.066225   17445 main.go:141] libmachine: (addons-757601) Creating domain...
	I1207 20:03:11.495178   17445 main.go:141] libmachine: (addons-757601) Waiting to get IP...
	I1207 20:03:11.495850   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:11.496197   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:11.496228   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:11.496181   17477 retry.go:31] will retry after 243.739665ms: waiting for machine to come up
	I1207 20:03:11.741696   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:11.742151   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:11.742183   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:11.742101   17477 retry.go:31] will retry after 367.012275ms: waiting for machine to come up
	I1207 20:03:12.110850   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:12.111264   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:12.111286   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:12.111215   17477 retry.go:31] will retry after 363.515458ms: waiting for machine to come up
	I1207 20:03:12.476720   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:12.477118   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:12.477140   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:12.477076   17477 retry.go:31] will retry after 419.482562ms: waiting for machine to come up
	I1207 20:03:12.897570   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:12.897948   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:12.897978   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:12.897877   17477 retry.go:31] will retry after 682.058133ms: waiting for machine to come up
	I1207 20:03:13.581721   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:13.582146   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:13.582172   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:13.582114   17477 retry.go:31] will retry after 647.947375ms: waiting for machine to come up
	I1207 20:03:14.231957   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:14.232341   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:14.232371   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:14.232298   17477 retry.go:31] will retry after 921.807402ms: waiting for machine to come up
	I1207 20:03:15.156082   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:15.156492   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:15.156514   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:15.156439   17477 retry.go:31] will retry after 917.331511ms: waiting for machine to come up
	I1207 20:03:16.075774   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:16.076196   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:16.076219   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:16.076154   17477 retry.go:31] will retry after 1.785317885s: waiting for machine to come up
	I1207 20:03:17.862971   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:17.863386   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:17.863439   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:17.863336   17477 retry.go:31] will retry after 2.000089453s: waiting for machine to come up
	I1207 20:03:19.865247   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:19.865730   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:19.865777   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:19.865695   17477 retry.go:31] will retry after 1.882016563s: waiting for machine to come up
	I1207 20:03:21.748909   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:21.749279   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:21.749308   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:21.749231   17477 retry.go:31] will retry after 2.647990369s: waiting for machine to come up
	I1207 20:03:24.398546   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:24.398997   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:24.399026   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:24.398950   17477 retry.go:31] will retry after 3.918549195s: waiting for machine to come up
	I1207 20:03:28.322014   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:28.322529   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find current IP address of domain addons-757601 in network mk-addons-757601
	I1207 20:03:28.322551   17445 main.go:141] libmachine: (addons-757601) DBG | I1207 20:03:28.322495   17477 retry.go:31] will retry after 3.458276187s: waiting for machine to come up
	I1207 20:03:31.783526   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:31.783979   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has current primary IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:31.783998   17445 main.go:141] libmachine: (addons-757601) Found IP for machine: 192.168.39.93
	I1207 20:03:31.784013   17445 main.go:141] libmachine: (addons-757601) Reserving static IP address...
	I1207 20:03:31.784385   17445 main.go:141] libmachine: (addons-757601) DBG | unable to find host DHCP lease matching {name: "addons-757601", mac: "52:54:00:e0:35:1c", ip: "192.168.39.93"} in network mk-addons-757601
	I1207 20:03:31.851109   17445 main.go:141] libmachine: (addons-757601) DBG | Getting to WaitForSSH function...
	I1207 20:03:31.851144   17445 main.go:141] libmachine: (addons-757601) Reserved static IP address: 192.168.39.93
	I1207 20:03:31.851163   17445 main.go:141] libmachine: (addons-757601) Waiting for SSH to be available...
	I1207 20:03:31.853468   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:31.853858   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:31.853880   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:31.854075   17445 main.go:141] libmachine: (addons-757601) DBG | Using SSH client type: external
	I1207 20:03:31.854100   17445 main.go:141] libmachine: (addons-757601) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa (-rw-------)
	I1207 20:03:31.854227   17445 main.go:141] libmachine: (addons-757601) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 20:03:31.854251   17445 main.go:141] libmachine: (addons-757601) DBG | About to run SSH command:
	I1207 20:03:31.854265   17445 main.go:141] libmachine: (addons-757601) DBG | exit 0
	I1207 20:03:31.961481   17445 main.go:141] libmachine: (addons-757601) DBG | SSH cmd err, output: <nil>: 
	I1207 20:03:31.961797   17445 main.go:141] libmachine: (addons-757601) KVM machine creation complete!
	I1207 20:03:31.962133   17445 main.go:141] libmachine: (addons-757601) Calling .GetConfigRaw
	I1207 20:03:31.962647   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:03:31.963007   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:03:31.963240   17445 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1207 20:03:31.963254   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:03:31.964469   17445 main.go:141] libmachine: Detecting operating system of created instance...
	I1207 20:03:31.964481   17445 main.go:141] libmachine: Waiting for SSH to be available...
	I1207 20:03:31.964487   17445 main.go:141] libmachine: Getting to WaitForSSH function...
	I1207 20:03:31.964494   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:31.966514   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:31.966806   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:31.966830   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:31.966914   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:03:31.967090   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:31.967234   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:31.967372   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:03:31.967499   17445 main.go:141] libmachine: Using SSH client type: native
	I1207 20:03:31.967814   17445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.93 22 <nil> <nil>}
	I1207 20:03:31.967825   17445 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1207 20:03:32.101400   17445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:03:32.101426   17445 main.go:141] libmachine: Detecting the provisioner...
	I1207 20:03:32.101434   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:32.103789   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.104218   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:32.104244   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.104426   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:03:32.104613   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:32.104768   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:32.104911   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:03:32.105059   17445 main.go:141] libmachine: Using SSH client type: native
	I1207 20:03:32.105370   17445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.93 22 <nil> <nil>}
	I1207 20:03:32.105387   17445 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1207 20:03:32.238570   17445 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2b7375-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1207 20:03:32.238668   17445 main.go:141] libmachine: found compatible host: buildroot
	I1207 20:03:32.238683   17445 main.go:141] libmachine: Provisioning with buildroot...
	I1207 20:03:32.238695   17445 main.go:141] libmachine: (addons-757601) Calling .GetMachineName
	I1207 20:03:32.238926   17445 buildroot.go:166] provisioning hostname "addons-757601"
	I1207 20:03:32.238952   17445 main.go:141] libmachine: (addons-757601) Calling .GetMachineName
	I1207 20:03:32.239072   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:32.241250   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.241558   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:32.241594   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.241679   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:03:32.241853   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:32.242056   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:32.242204   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:03:32.242354   17445 main.go:141] libmachine: Using SSH client type: native
	I1207 20:03:32.242677   17445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.93 22 <nil> <nil>}
	I1207 20:03:32.242691   17445 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-757601 && echo "addons-757601" | sudo tee /etc/hostname
	I1207 20:03:32.388266   17445 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-757601
	
	I1207 20:03:32.388291   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:32.391033   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.391422   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:32.391452   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.391571   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:03:32.391787   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:32.391903   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:32.392062   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:03:32.392192   17445 main.go:141] libmachine: Using SSH client type: native
	I1207 20:03:32.392630   17445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.93 22 <nil> <nil>}
	I1207 20:03:32.392663   17445 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-757601' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-757601/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-757601' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:03:32.535858   17445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
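
As an aside, the two provisioning commands above (the hostname write and the /etc/hosts snippet) are easy to double-check from inside the guest. A minimal sketch, assuming interactive shell access to the VM (for example via `minikube ssh -p addons-757601`, which this run does not itself perform):

  # confirm the hostname and the 127.0.1.1 mapping written by the snippet above
  hostname                        # expected: addons-757601
  grep '^127.0.1.1' /etc/hosts    # expected: 127.0.1.1 addons-757601
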
	I1207 20:03:32.535887   17445 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 20:03:32.535931   17445 buildroot.go:174] setting up certificates
	I1207 20:03:32.535943   17445 provision.go:83] configureAuth start
	I1207 20:03:32.535955   17445 main.go:141] libmachine: (addons-757601) Calling .GetMachineName
	I1207 20:03:32.536212   17445 main.go:141] libmachine: (addons-757601) Calling .GetIP
	I1207 20:03:32.538749   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.539031   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:32.539052   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.539191   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:32.541434   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.541725   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:32.541752   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.541844   17445 provision.go:138] copyHostCerts
	I1207 20:03:32.541941   17445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 20:03:32.542079   17445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 20:03:32.542155   17445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 20:03:32.542215   17445 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.addons-757601 san=[192.168.39.93 192.168.39.93 localhost 127.0.0.1 minikube addons-757601]
	I1207 20:03:32.762196   17445 provision.go:172] copyRemoteCerts
	I1207 20:03:32.762251   17445 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:03:32.762273   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:32.765121   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.765401   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:32.765426   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.765622   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:03:32.765796   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:32.765941   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:03:32.766055   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:03:32.859392   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 20:03:32.885364   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1207 20:03:32.910113   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
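
The server certificate copied to /etc/docker here was generated with the SAN list from the provision step above (192.168.39.93, localhost, 127.0.0.1, minikube, addons-757601). A small hypothetical check, assuming shell access on the guest, that those SANs made it into the cert:

  # inspect the SANs baked into the machine server certificate copied above
  sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
  # should list the IPs and DNS names from the san=[...] set generated earlier
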
	I1207 20:03:32.935212   17445 provision.go:86] duration metric: configureAuth took 399.248063ms
	I1207 20:03:32.935240   17445 buildroot.go:189] setting minikube options for container-runtime
	I1207 20:03:32.935421   17445 config.go:182] Loaded profile config "addons-757601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:03:32.935518   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:32.938032   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.938383   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:32.938423   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:32.938584   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:03:32.938741   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:32.938908   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:32.939053   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:03:32.939195   17445 main.go:141] libmachine: Using SSH client type: native
	I1207 20:03:32.939517   17445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.93 22 <nil> <nil>}
	I1207 20:03:32.939533   17445 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 20:03:33.256127   17445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
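
The tee target above is a sysconfig drop-in that the CRI-O unit on the minikube ISO presumably sources as an environment file; a quick hedged check that the restart picked it up:

  # the provisioning command above should leave exactly this file behind
  cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  systemctl is-active crio           # "active" once the restart at the end of that command succeeds
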
	
	I1207 20:03:33.256159   17445 main.go:141] libmachine: Checking connection to Docker...
	I1207 20:03:33.256187   17445 main.go:141] libmachine: (addons-757601) Calling .GetURL
	I1207 20:03:33.257292   17445 main.go:141] libmachine: (addons-757601) DBG | Using libvirt version 6000000
	I1207 20:03:33.259543   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.259900   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:33.259924   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.260039   17445 main.go:141] libmachine: Docker is up and running!
	I1207 20:03:33.260067   17445 main.go:141] libmachine: Reticulating splines...
	I1207 20:03:33.260073   17445 client.go:171] LocalClient.Create took 23.878057944s
	I1207 20:03:33.260092   17445 start.go:167] duration metric: libmachine.API.Create for "addons-757601" took 23.878106691s
	I1207 20:03:33.260107   17445 start.go:300] post-start starting for "addons-757601" (driver="kvm2")
	I1207 20:03:33.260118   17445 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:03:33.260138   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:03:33.260373   17445 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:03:33.260400   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:33.262653   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.262987   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:33.263016   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.263168   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:03:33.263352   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:33.263521   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:03:33.263673   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:03:33.358967   17445 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:03:33.363146   17445 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 20:03:33.363166   17445 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 20:03:33.363242   17445 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 20:03:33.363265   17445 start.go:303] post-start completed in 103.151166ms
	I1207 20:03:33.363297   17445 main.go:141] libmachine: (addons-757601) Calling .GetConfigRaw
	I1207 20:03:33.363850   17445 main.go:141] libmachine: (addons-757601) Calling .GetIP
	I1207 20:03:33.366420   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.366731   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:33.366754   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.366909   17445 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/config.json ...
	I1207 20:03:33.367060   17445 start.go:128] duration metric: createHost completed in 24.003081529s
	I1207 20:03:33.367079   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:33.369010   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.369292   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:33.369329   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.369420   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:03:33.369612   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:33.369755   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:33.369854   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:03:33.369969   17445 main.go:141] libmachine: Using SSH client type: native
	I1207 20:03:33.370318   17445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.93 22 <nil> <nil>}
	I1207 20:03:33.370335   17445 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 20:03:33.502693   17445 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701979413.482936102
	
	I1207 20:03:33.502714   17445 fix.go:206] guest clock: 1701979413.482936102
	I1207 20:03:33.502724   17445 fix.go:219] Guest: 2023-12-07 20:03:33.482936102 +0000 UTC Remote: 2023-12-07 20:03:33.367069793 +0000 UTC m=+24.119808173 (delta=115.866309ms)
	I1207 20:03:33.502758   17445 fix.go:190] guest clock delta is within tolerance: 115.866309ms
	I1207 20:03:33.502763   17445 start.go:83] releasing machines lock for "addons-757601", held for 24.138890637s
	I1207 20:03:33.502785   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:03:33.503033   17445 main.go:141] libmachine: (addons-757601) Calling .GetIP
	I1207 20:03:33.506267   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.506614   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:33.506644   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.506815   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:03:33.507242   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:03:33.507393   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:03:33.507488   17445 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:03:33.507538   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:33.507673   17445 ssh_runner.go:195] Run: cat /version.json
	I1207 20:03:33.507699   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:03:33.510122   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.510450   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:33.510477   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.510505   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.510588   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:03:33.510755   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:33.510858   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:33.510877   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:33.510881   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:03:33.511055   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:03:33.511122   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:03:33.511201   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:03:33.511297   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:03:33.511387   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:03:33.627687   17445 ssh_runner.go:195] Run: systemctl --version
	I1207 20:03:33.632968   17445 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 20:03:33.791956   17445 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 20:03:33.798750   17445 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 20:03:33.798823   17445 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 20:03:33.812506   17445 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 20:03:33.812528   17445 start.go:475] detecting cgroup driver to use...
	I1207 20:03:33.812586   17445 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 20:03:33.827930   17445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 20:03:33.839073   17445 docker.go:203] disabling cri-docker service (if available) ...
	I1207 20:03:33.839135   17445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 20:03:33.850880   17445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 20:03:33.862416   17445 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 20:03:33.959048   17445 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 20:03:34.075827   17445 docker.go:219] disabling docker service ...
	I1207 20:03:34.075896   17445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 20:03:34.089165   17445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 20:03:34.101273   17445 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 20:03:34.206739   17445 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 20:03:34.305327   17445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
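
At this point containerd has been stopped and both docker and cri-docker have been stopped, disabled and masked, leaving CRI-O as the only container runtime on the node. A quick hedged check:

  # both runtimes should now report "masked" so only CRI-O remains
  systemctl is-enabled cri-docker.service docker.service
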
	I1207 20:03:34.317766   17445 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:03:34.334373   17445 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 20:03:34.334432   17445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:03:34.343573   17445 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 20:03:34.343638   17445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:03:34.352556   17445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:03:34.361232   17445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:03:34.369910   17445 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 20:03:34.379024   17445 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:03:34.387210   17445 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 20:03:34.387252   17445 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 20:03:34.398806   17445 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 20:03:34.407606   17445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:03:34.532130   17445 ssh_runner.go:195] Run: sudo systemctl restart crio
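
Taken together, the sed edits above amount to three settings in the CRI-O drop-in config; a sketch of what they should leave behind in /etc/crio/crio.conf.d/02-crio.conf (values read straight off the commands in this log):

  # relevant lines after the sed edits above
  grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.9"
  # cgroup_manager = "cgroupfs"
  # conmon_cgroup = "pod"
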
	I1207 20:03:34.697313   17445 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 20:03:34.697412   17445 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 20:03:34.702673   17445 start.go:543] Will wait 60s for crictl version
	I1207 20:03:34.702738   17445 ssh_runner.go:195] Run: which crictl
	I1207 20:03:34.706054   17445 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 20:03:34.743589   17445 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
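
crictl picks its endpoint up from the /etc/crictl.yaml written a few lines earlier, so the version call above is equivalent to spelling the endpoint out explicitly (a hedged restatement, not an extra step this run performs):

  # same check with the runtime endpoint passed on the command line
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
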
	I1207 20:03:34.743728   17445 ssh_runner.go:195] Run: crio --version
	I1207 20:03:34.791583   17445 ssh_runner.go:195] Run: crio --version
	I1207 20:03:34.835610   17445 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 20:03:34.837036   17445 main.go:141] libmachine: (addons-757601) Calling .GetIP
	I1207 20:03:34.839843   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:34.840185   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:03:34.840215   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:03:34.840427   17445 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 20:03:34.844370   17445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
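
The bash one-liner above rewrites /etc/hosts through a temp file so the gateway alias is present exactly once; once it has run, the guest should resolve host.minikube.internal to the libvirt gateway. A minimal sketch of the check:

  # verify the alias written by the one-liner above
  grep 'host.minikube.internal' /etc/hosts   # expected: 192.168.39.1  host.minikube.internal
  getent hosts host.minikube.internal        # resolves via /etc/hosts to 192.168.39.1
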
	I1207 20:03:34.855952   17445 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:03:34.856028   17445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 20:03:34.889353   17445 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 20:03:34.889439   17445 ssh_runner.go:195] Run: which lz4
	I1207 20:03:34.893251   17445 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 20:03:34.897281   17445 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 20:03:34.897305   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 20:03:36.744063   17445 crio.go:444] Took 1.850840 seconds to copy over tarball
	I1207 20:03:36.744129   17445 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 20:03:39.848791   17445 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.104641484s)
	I1207 20:03:39.848822   17445 crio.go:451] Took 3.104738 seconds to extract the tarball
	I1207 20:03:39.848830   17445 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 20:03:39.890849   17445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 20:03:39.959367   17445 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 20:03:39.959390   17445 cache_images.go:84] Images are preloaded, skipping loading
	I1207 20:03:39.959472   17445 ssh_runner.go:195] Run: crio config
	I1207 20:03:40.012635   17445 cni.go:84] Creating CNI manager for ""
	I1207 20:03:40.012656   17445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:03:40.012675   17445 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:03:40.012693   17445 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.93 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-757601 NodeName:addons-757601 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 20:03:40.012815   17445 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-757601"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
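
Before kubeadm is actually invoked, the generated config above can be exercised without mutating the node. A hedged sketch, using the /var/tmp/minikube/kubeadm.yaml path that the later `sudo cp .../kubeadm.yaml.new .../kubeadm.yaml` step in this log produces:

  # dry-run kubeadm against the generated config; nothing is written to the node
  sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
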
	
	I1207 20:03:40.012915   17445 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-757601 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-757601 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
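
The drop-in above overrides ExecStart so the kubelet talks to CRI-O over its unix socket. Once it has been written out (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), systemd itself can confirm it; a minimal sketch:

  # show the merged kubelet unit, including the 10-kubeadm.conf drop-in
  systemctl cat kubelet
  systemctl show kubelet -p ExecStart --no-pager   # should include --container-runtime-endpoint=unix:///var/run/crio/crio.sock
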
	I1207 20:03:40.012968   17445 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 20:03:40.023094   17445 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 20:03:40.023168   17445 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 20:03:40.032161   17445 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1207 20:03:40.047381   17445 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 20:03:40.062210   17445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1207 20:03:40.077849   17445 ssh_runner.go:195] Run: grep 192.168.39.93	control-plane.minikube.internal$ /etc/hosts
	I1207 20:03:40.082378   17445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:03:40.093141   17445 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601 for IP: 192.168.39.93
	I1207 20:03:40.093172   17445 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:40.093314   17445 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 20:03:40.434621   17445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt ...
	I1207 20:03:40.434655   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt: {Name:mkaae60584b3963019b9e388b3306cbac811b7b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:40.434827   17445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key ...
	I1207 20:03:40.434839   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key: {Name:mkec32cded1bf638649ec619d0e2cd05c79ec531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:40.434916   17445 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 20:03:40.556954   17445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt ...
	I1207 20:03:40.556987   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt: {Name:mkdc0b012c148cc6bb05c00f248605a4a7b18a9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:40.557163   17445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key ...
	I1207 20:03:40.557175   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key: {Name:mk45871c7842ca178aa101aea82bca25ac538121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:40.557288   17445 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.key
	I1207 20:03:40.557304   17445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt with IP's: []
	I1207 20:03:40.798339   17445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt ...
	I1207 20:03:40.798372   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: {Name:mk5ad89a6f643defe335cbe42d255a386bae2513 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:40.798525   17445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.key ...
	I1207 20:03:40.798536   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.key: {Name:mkeb0ddc7a0bc5db8103d3845c2b30244dbbbaf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:40.798599   17445 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.key.1c9ae3b6
	I1207 20:03:40.798615   17445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.crt.1c9ae3b6 with IP's: [192.168.39.93 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 20:03:40.869823   17445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.crt.1c9ae3b6 ...
	I1207 20:03:40.869852   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.crt.1c9ae3b6: {Name:mk1d03b94ca34df1e91a2ba9adcb6ce88934fee0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:40.870029   17445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.key.1c9ae3b6 ...
	I1207 20:03:40.870048   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.key.1c9ae3b6: {Name:mkcd628e05ccae63ef3139f9cfe9a72349a39487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:40.870145   17445 certs.go:337] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.crt.1c9ae3b6 -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.crt
	I1207 20:03:40.870229   17445 certs.go:341] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.key.1c9ae3b6 -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.key
	I1207 20:03:40.870309   17445 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/proxy-client.key
	I1207 20:03:40.870327   17445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/proxy-client.crt with IP's: []
	I1207 20:03:41.159598   17445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/proxy-client.crt ...
	I1207 20:03:41.159629   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/proxy-client.crt: {Name:mk62e4a35dba2da733c578ddf8ab1e88e8c91e1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:41.159816   17445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/proxy-client.key ...
	I1207 20:03:41.159829   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/proxy-client.key: {Name:mk2c245fa7a0870360d8f260534bb51a03004908 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:03:41.160013   17445 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 20:03:41.160049   17445 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 20:03:41.160071   17445 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:03:41.160094   17445 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 20:03:41.160667   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 20:03:41.183577   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 20:03:41.204700   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 20:03:41.225695   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 20:03:41.246315   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:03:41.267213   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 20:03:41.288036   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:03:41.308813   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:03:41.329974   17445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:03:41.350462   17445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 20:03:41.365379   17445 ssh_runner.go:195] Run: openssl version
	I1207 20:03:41.370888   17445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:03:41.381360   17445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:03:41.386148   17445 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:03:41.386201   17445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:03:41.391872   17445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
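
The b5213941.0 name is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, which is exactly what the `openssl x509 -hash` call above computes, and a c_rehash-style <hash>.0 symlink is what makes the CA discoverable under /etc/ssl/certs. A small sketch reproducing the relationship:

  # the subject hash printed here is what the /etc/ssl/certs/<hash>.0 symlink is named after
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941 for this CA, per the ln -fs above
  ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem
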
	I1207 20:03:41.402769   17445 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:03:41.406975   17445 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:03:41.407023   17445 kubeadm.go:404] StartCluster: {Name:addons-757601 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:addons-757601 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:03:41.407101   17445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 20:03:41.407163   17445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 20:03:41.443829   17445 cri.go:89] found id: ""
	I1207 20:03:41.443919   17445 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 20:03:41.453695   17445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 20:03:41.463185   17445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 20:03:41.472488   17445 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 20:03:41.472532   17445 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 20:03:41.662749   17445 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 20:03:53.647306   17445 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 20:03:53.647358   17445 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 20:03:53.647495   17445 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 20:03:53.647630   17445 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 20:03:53.647774   17445 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 20:03:53.647859   17445 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 20:03:53.649861   17445 out.go:204]   - Generating certificates and keys ...
	I1207 20:03:53.649967   17445 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 20:03:53.650048   17445 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 20:03:53.650139   17445 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 20:03:53.650209   17445 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1207 20:03:53.650285   17445 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1207 20:03:53.650352   17445 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1207 20:03:53.650418   17445 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1207 20:03:53.650579   17445 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-757601 localhost] and IPs [192.168.39.93 127.0.0.1 ::1]
	I1207 20:03:53.650646   17445 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1207 20:03:53.650807   17445 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-757601 localhost] and IPs [192.168.39.93 127.0.0.1 ::1]
	I1207 20:03:53.650889   17445 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 20:03:53.650970   17445 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 20:03:53.651023   17445 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1207 20:03:53.651112   17445 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 20:03:53.651181   17445 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 20:03:53.651255   17445 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 20:03:53.651336   17445 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 20:03:53.651393   17445 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 20:03:53.651482   17445 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 20:03:53.651539   17445 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 20:03:53.653296   17445 out.go:204]   - Booting up control plane ...
	I1207 20:03:53.653363   17445 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 20:03:53.653420   17445 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 20:03:53.653474   17445 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 20:03:53.653564   17445 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 20:03:53.653643   17445 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 20:03:53.653676   17445 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 20:03:53.653827   17445 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 20:03:53.653940   17445 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002894 seconds
	I1207 20:03:53.654086   17445 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 20:03:53.654258   17445 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 20:03:53.654329   17445 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 20:03:53.654516   17445 kubeadm.go:322] [mark-control-plane] Marking the node addons-757601 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 20:03:53.654564   17445 kubeadm.go:322] [bootstrap-token] Using token: 2c4hwp.w6vvf1zyxl3bgf12
	I1207 20:03:53.655999   17445 out.go:204]   - Configuring RBAC rules ...
	I1207 20:03:53.656071   17445 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 20:03:53.656134   17445 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 20:03:53.656239   17445 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 20:03:53.656415   17445 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 20:03:53.656589   17445 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 20:03:53.656696   17445 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 20:03:53.656799   17445 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 20:03:53.656836   17445 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 20:03:53.656898   17445 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 20:03:53.656910   17445 kubeadm.go:322] 
	I1207 20:03:53.656989   17445 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 20:03:53.656998   17445 kubeadm.go:322] 
	I1207 20:03:53.657095   17445 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 20:03:53.657107   17445 kubeadm.go:322] 
	I1207 20:03:53.657130   17445 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 20:03:53.657178   17445 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 20:03:53.657220   17445 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 20:03:53.657225   17445 kubeadm.go:322] 
	I1207 20:03:53.657268   17445 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 20:03:53.657273   17445 kubeadm.go:322] 
	I1207 20:03:53.657326   17445 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 20:03:53.657332   17445 kubeadm.go:322] 
	I1207 20:03:53.657373   17445 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 20:03:53.657439   17445 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 20:03:53.657494   17445 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 20:03:53.657499   17445 kubeadm.go:322] 
	I1207 20:03:53.657564   17445 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 20:03:53.657642   17445 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 20:03:53.657648   17445 kubeadm.go:322] 
	I1207 20:03:53.657712   17445 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2c4hwp.w6vvf1zyxl3bgf12 \
	I1207 20:03:53.657794   17445 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 20:03:53.657813   17445 kubeadm.go:322] 	--control-plane 
	I1207 20:03:53.657818   17445 kubeadm.go:322] 
	I1207 20:03:53.657911   17445 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 20:03:53.657931   17445 kubeadm.go:322] 
	I1207 20:03:53.658032   17445 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2c4hwp.w6vvf1zyxl3bgf12 \
	I1207 20:03:53.658156   17445 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 20:03:53.658170   17445 cni.go:84] Creating CNI manager for ""
	I1207 20:03:53.658178   17445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:03:53.659844   17445 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 20:03:53.661215   17445 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 20:03:53.693308   17445 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 20:03:53.715754   17445 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 20:03:53.715817   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:53.715835   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=addons-757601 minikube.k8s.io/updated_at=2023_12_07T20_03_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:53.765645   17445 ops.go:34] apiserver oom_adj: -16
	I1207 20:03:53.941628   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:54.034657   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:54.619897   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:55.120589   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:55.619965   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:56.120485   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:56.620292   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:57.120779   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:57.620032   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:58.120790   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:58.619965   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:59.120132   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:03:59.620271   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:00.119718   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:00.619959   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:01.119734   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:01.619891   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:02.120677   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:02.620359   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:03.120485   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:03.620087   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:04.120612   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:04.620292   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:05.119782   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:05.620333   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:06.120061   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:06.620495   17445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:04:06.799157   17445 kubeadm.go:1088] duration metric: took 13.083393911s to wait for elevateKubeSystemPrivileges.
	I1207 20:04:06.799199   17445 kubeadm.go:406] StartCluster complete in 25.392178232s
	I1207 20:04:06.799220   17445 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:04:06.799361   17445 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:04:06.799855   17445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:04:06.800099   17445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 20:04:06.800180   17445 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1207 20:04:06.800310   17445 addons.go:69] Setting helm-tiller=true in profile "addons-757601"
	I1207 20:04:06.800329   17445 addons.go:69] Setting volumesnapshots=true in profile "addons-757601"
	I1207 20:04:06.800340   17445 config.go:182] Loaded profile config "addons-757601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:04:06.800356   17445 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-757601"
	I1207 20:04:06.800358   17445 addons.go:69] Setting ingress=true in profile "addons-757601"
	I1207 20:04:06.800367   17445 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-757601"
	I1207 20:04:06.800361   17445 addons.go:69] Setting registry=true in profile "addons-757601"
	I1207 20:04:06.800375   17445 addons.go:231] Setting addon ingress=true in "addons-757601"
	I1207 20:04:06.800382   17445 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-757601"
	I1207 20:04:06.800343   17445 addons.go:231] Setting addon helm-tiller=true in "addons-757601"
	I1207 20:04:06.800392   17445 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-757601"
	I1207 20:04:06.800403   17445 addons.go:69] Setting gcp-auth=true in profile "addons-757601"
	I1207 20:04:06.800412   17445 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-757601"
	I1207 20:04:06.800411   17445 addons.go:69] Setting default-storageclass=true in profile "addons-757601"
	I1207 20:04:06.800421   17445 addons.go:69] Setting ingress-dns=true in profile "addons-757601"
	I1207 20:04:06.800441   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.800443   17445 addons.go:69] Setting inspektor-gadget=true in profile "addons-757601"
	I1207 20:04:06.800450   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.800450   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.800453   17445 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-757601"
	I1207 20:04:06.800460   17445 addons.go:231] Setting addon inspektor-gadget=true in "addons-757601"
	I1207 20:04:06.800497   17445 addons.go:231] Setting addon ingress-dns=true in "addons-757601"
	I1207 20:04:06.800525   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.800550   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.800867   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.800871   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.800879   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.800357   17445 addons.go:69] Setting storage-provisioner=true in profile "addons-757601"
	I1207 20:04:06.800889   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.800895   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.800900   17445 addons.go:231] Setting addon storage-provisioner=true in "addons-757601"
	I1207 20:04:06.800901   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.800925   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.800940   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.800959   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.801024   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.801059   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.800394   17445 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-757601"
	I1207 20:04:06.801231   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.801269   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.801299   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.801586   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.801617   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.800896   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.800339   17445 addons.go:69] Setting metrics-server=true in profile "addons-757601"
	I1207 20:04:06.801852   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.801857   17445 addons.go:231] Setting addon metrics-server=true in "addons-757601"
	I1207 20:04:06.800349   17445 addons.go:69] Setting cloud-spanner=true in profile "addons-757601"
	I1207 20:04:06.801896   17445 addons.go:231] Setting addon cloud-spanner=true in "addons-757601"
	I1207 20:04:06.800383   17445 addons.go:231] Setting addon registry=true in "addons-757601"
	I1207 20:04:06.801989   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.802023   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.802320   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.802346   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.802390   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.802429   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.800348   17445 addons.go:231] Setting addon volumesnapshots=true in "addons-757601"
	I1207 20:04:06.806676   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.807044   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.807070   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.800434   17445 mustload.go:65] Loading cluster: addons-757601
	I1207 20:04:06.807437   17445 config.go:182] Loaded profile config "addons-757601": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:04:06.807778   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.807818   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.800914   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.800914   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.801944   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.814448   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.814472   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.821376   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I1207 20:04:06.821812   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.822463   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.822485   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.822846   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.823453   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.823492   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.824824   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37351
	I1207 20:04:06.825291   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.825751   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.825766   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.826592   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.826790   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.829836   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I1207 20:04:06.830135   17445 addons.go:231] Setting addon default-storageclass=true in "addons-757601"
	I1207 20:04:06.830174   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.830144   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.830570   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.830606   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.830637   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.830653   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.830980   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.831580   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.831615   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.837477   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I1207 20:04:06.839694   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36869
	I1207 20:04:06.839850   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39639
	I1207 20:04:06.839856   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33803
	I1207 20:04:06.840071   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I1207 20:04:06.840151   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.840213   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.840464   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.840660   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.840672   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.840680   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.840685   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.840815   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.841158   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.841178   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.841218   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.841726   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.841769   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.841975   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.842117   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.842135   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.842523   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.842554   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.842724   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.842757   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46415
	I1207 20:04:06.843003   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.843256   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.843295   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.843329   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.843343   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.843593   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.844129   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.844166   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.848419   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40147
	I1207 20:04:06.848461   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.848429   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.848682   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.848862   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.848974   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.848993   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.849293   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.849311   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.849371   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.849901   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.850119   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.850649   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.850606   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.850823   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.851022   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.851388   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.851415   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.857980   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I1207 20:04:06.858308   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.858818   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.858836   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.859180   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.859692   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.859726   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.863233   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I1207 20:04:06.863606   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.864245   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.864263   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.864621   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.864799   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.866681   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.869215   17445 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1207 20:04:06.867417   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40263
	I1207 20:04:06.869827   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I1207 20:04:06.870828   17445 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 20:04:06.870845   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1207 20:04:06.870863   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.871210   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.873300   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.873702   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.873715   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.874861   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.874889   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.874917   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.874937   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.874963   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.874967   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.874990   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.875123   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.875139   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.875283   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.875558   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.876090   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.876290   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.876789   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.883160   17445 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1207 20:04:06.882328   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43767
	I1207 20:04:06.884450   17445 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1207 20:04:06.884465   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1207 20:04:06.884483   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.883009   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.886429   17445 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1207 20:04:06.884992   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.887782   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.889209   17445 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1207 20:04:06.888249   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.888413   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.888688   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.892099   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I1207 20:04:06.893656   17445 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1207 20:04:06.892234   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.892253   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.892357   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.892452   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44803
	I1207 20:04:06.892517   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.895787   17445 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 20:04:06.895811   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1207 20:04:06.895831   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.895919   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.896576   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.896611   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.896722   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.896742   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.896754   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.896801   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.896812   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.896965   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.897505   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.897682   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.898378   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.898623   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.899827   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.899864   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.901654   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.901703   17445 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1207 20:04:06.900405   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.900566   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.901486   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.902471   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38033
	I1207 20:04:06.904059   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35477
	I1207 20:04:06.904151   17445 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 20:04:06.904221   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.904253   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I1207 20:04:06.904307   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.905862   17445 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1207 20:04:06.905978   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35401
	I1207 20:04:06.906063   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1207 20:04:06.906271   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.906440   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.906474   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.906636   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.907188   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36063
	I1207 20:04:06.907373   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36145
	I1207 20:04:06.907446   17445 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1207 20:04:06.908765   17445 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1207 20:04:06.908788   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.908790   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1207 20:04:06.908851   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.907727   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.908898   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.908910   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.907746   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.908038   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.908942   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.908900   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.908380   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.908063   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.910535   17445 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1207 20:04:06.910547   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1207 20:04:06.910563   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.909260   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.909291   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.909339   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.910758   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.909513   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.910799   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.909558   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.909664   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.911138   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.911475   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.911497   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.912022   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.912044   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.912216   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.912280   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.912478   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.912646   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.913451   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.913547   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.913566   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.914612   17445 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-757601"
	I1207 20:04:06.914656   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:06.915026   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.915054   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.915286   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.915347   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.915510   17445 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-757601" context rescaled to 1 replicas
	I1207 20:04:06.915535   17445 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 20:04:06.917563   17445 out.go:177] * Verifying Kubernetes components...
	I1207 20:04:06.919422   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.919464   17445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:04:06.917618   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.919556   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.919581   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.916001   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.921277   17445 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:04:06.916659   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.916808   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.915993   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36723
	I1207 20:04:06.917951   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.918928   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.919083   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.919602   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.924625   17445 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1207 20:04:06.922917   17445 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:04:06.922939   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.923024   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.923437   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.923460   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.923719   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.923870   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35905
	I1207 20:04:06.923931   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.926008   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.927492   17445 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1207 20:04:06.926152   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 20:04:06.926284   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.926314   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.926711   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.926765   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.926863   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.929032   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.929070   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.930844   17445 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1207 20:04:06.929670   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.929694   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.929937   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.929963   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.930317   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.932210   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.934182   17445 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1207 20:04:06.932818   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.933001   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.933437   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.933569   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.934660   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I1207 20:04:06.937067   17445 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1207 20:04:06.934961   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42083
	I1207 20:04:06.935657   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.935942   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.937021   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.937190   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.937534   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43651
	I1207 20:04:06.940281   17445 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1207 20:04:06.939227   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.941673   17445 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1207 20:04:06.939453   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.939643   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.939756   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.939284   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.940434   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.941719   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.942198   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.943160   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.943164   17445 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1207 20:04:06.944724   17445 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1207 20:04:06.942687   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.944766   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.943493   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.943600   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.943839   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.944747   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1207 20:04:06.944917   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.945005   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.945072   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.946451   17445 out.go:177]   - Using image docker.io/registry:2.8.3
	I1207 20:04:06.947794   17445 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1207 20:04:06.946748   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.949334   17445 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1207 20:04:06.949353   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1207 20:04:06.945542   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.949369   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.945362   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:06.949441   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:06.947648   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I1207 20:04:06.951054   17445 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1207 20:04:06.948407   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.948995   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.949812   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.951758   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.952473   17445 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 20:04:06.952488   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 20:04:06.952506   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.952673   17445 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 20:04:06.952688   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 20:04:06.952704   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.952709   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.952668   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.952749   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.952843   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.952986   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.953496   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.953518   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.954013   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.954553   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.954580   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.954919   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.955073   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.955204   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.955340   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.955399   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.955655   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.956481   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.956499   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.956938   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.956961   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.956960   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.956981   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.957152   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.957323   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.957330   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.957491   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.957515   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.957612   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.957630   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.957742   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.957791   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:06.959741   17445 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1207 20:04:06.961050   17445 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1207 20:04:06.961071   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1207 20:04:06.961090   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.963593   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.963973   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.964001   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.964150   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.964287   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.964397   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.964504   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	W1207 20:04:06.965554   17445 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60118->192.168.39.93:22: read: connection reset by peer
	I1207 20:04:06.965581   17445 retry.go:31] will retry after 281.443033ms: ssh: handshake failed: read tcp 192.168.39.1:60118->192.168.39.93:22: read: connection reset by peer
	I1207 20:04:06.966182   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45533
	I1207 20:04:06.966526   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:06.966948   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:06.966970   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:06.967221   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:06.967382   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:06.968602   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:06.970637   17445 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1207 20:04:06.972185   17445 out.go:177]   - Using image docker.io/busybox:stable
	I1207 20:04:06.973452   17445 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 20:04:06.973467   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1207 20:04:06.973479   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:06.976270   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.976699   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:06.976725   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:06.976844   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:06.977029   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:06.977174   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:06.977312   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:07.229038   17445 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 20:04:07.229949   17445 node_ready.go:35] waiting up to 6m0s for node "addons-757601" to be "Ready" ...
	I1207 20:04:07.256851   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 20:04:07.259710   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 20:04:07.272584   17445 node_ready.go:49] node "addons-757601" has status "Ready":"True"
	I1207 20:04:07.272604   17445 node_ready.go:38] duration metric: took 42.6287ms waiting for node "addons-757601" to be "Ready" ...
	I1207 20:04:07.272612   17445 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:04:07.275971   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 20:04:07.283832   17445 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1207 20:04:07.283848   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1207 20:04:07.301753   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:04:07.308950   17445 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1207 20:04:07.308971   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1207 20:04:07.313783   17445 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1207 20:04:07.313806   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1207 20:04:07.324872   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 20:04:07.325593   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1207 20:04:07.328036   17445 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1207 20:04:07.328055   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1207 20:04:07.331349   17445 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:07.364120   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 20:04:07.493812   17445 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 20:04:07.493834   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1207 20:04:07.598298   17445 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1207 20:04:07.598326   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1207 20:04:07.609222   17445 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1207 20:04:07.609242   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1207 20:04:07.616150   17445 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1207 20:04:07.616167   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1207 20:04:07.628807   17445 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1207 20:04:07.628826   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1207 20:04:07.669843   17445 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 20:04:07.669864   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 20:04:07.678791   17445 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1207 20:04:07.678810   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1207 20:04:07.768405   17445 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1207 20:04:07.768428   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1207 20:04:07.769730   17445 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1207 20:04:07.769753   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1207 20:04:07.815939   17445 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1207 20:04:07.815969   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1207 20:04:07.915138   17445 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1207 20:04:07.915165   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1207 20:04:07.929420   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1207 20:04:07.945328   17445 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1207 20:04:07.945347   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1207 20:04:07.993882   17445 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 20:04:07.993908   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 20:04:08.030351   17445 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1207 20:04:08.030371   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1207 20:04:08.036911   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1207 20:04:08.061147   17445 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1207 20:04:08.061169   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1207 20:04:08.064124   17445 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1207 20:04:08.064142   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1207 20:04:08.087251   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 20:04:08.117291   17445 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 20:04:08.117320   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1207 20:04:08.127518   17445 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1207 20:04:08.127534   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1207 20:04:08.160558   17445 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1207 20:04:08.160576   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1207 20:04:08.228223   17445 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1207 20:04:08.228252   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1207 20:04:08.251535   17445 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1207 20:04:08.251554   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1207 20:04:08.252219   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 20:04:08.331582   17445 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1207 20:04:08.331603   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1207 20:04:08.335230   17445 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1207 20:04:08.335250   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1207 20:04:08.396300   17445 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1207 20:04:08.396321   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1207 20:04:08.402410   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1207 20:04:08.462854   17445 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 20:04:08.462883   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1207 20:04:08.505994   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 20:04:09.506018   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:10.704907   17445 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.475826501s)
	I1207 20:04:10.704942   17445 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1207 20:04:11.300923   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.044005632s)
	I1207 20:04:11.300983   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:11.300995   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:11.301296   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:11.301363   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:11.301380   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:11.301399   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:11.301411   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:11.301660   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:11.301686   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:11.517438   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:12.395941   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.13619007s)
	I1207 20:04:12.395965   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.119964165s)
	I1207 20:04:12.395995   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:12.396013   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:12.395995   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:12.396100   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:12.396260   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:12.396279   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:12.396289   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:12.396298   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:12.396376   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:12.396400   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:12.396408   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:12.396417   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:12.396435   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:12.396755   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:12.396762   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:12.396776   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:12.396778   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:12.396802   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:12.396811   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:12.665279   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:12.665304   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:12.665620   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:12.665663   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:12.665672   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:13.575155   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.273365546s)
	I1207 20:04:13.575205   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:13.575217   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:13.575705   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:13.575711   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:13.575725   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:13.575737   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:13.575745   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:13.575975   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:13.576025   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:13.576035   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:13.576577   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:13.594666   17445 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1207 20:04:13.594699   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:13.597824   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:13.598282   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:13.598319   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:13.598431   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:13.598624   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:13.598770   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:13.598940   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:13.809904   17445 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1207 20:04:13.897371   17445 addons.go:231] Setting addon gcp-auth=true in "addons-757601"
	I1207 20:04:13.897434   17445 host.go:66] Checking if "addons-757601" exists ...
	I1207 20:04:13.897900   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:13.897959   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:13.913617   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I1207 20:04:13.914054   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:13.914521   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:13.914541   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:13.914903   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:13.915391   17445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:04:13.915414   17445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:04:13.930109   17445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I1207 20:04:13.930540   17445 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:04:13.931176   17445 main.go:141] libmachine: Using API Version  1
	I1207 20:04:13.931204   17445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:04:13.931522   17445 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:04:13.931703   17445 main.go:141] libmachine: (addons-757601) Calling .GetState
	I1207 20:04:13.933443   17445 main.go:141] libmachine: (addons-757601) Calling .DriverName
	I1207 20:04:13.933668   17445 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1207 20:04:13.933689   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHHostname
	I1207 20:04:13.936825   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:13.937306   17445 main.go:141] libmachine: (addons-757601) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:35:1c", ip: ""} in network mk-addons-757601: {Iface:virbr1 ExpiryTime:2023-12-07 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e0:35:1c Iaid: IPaddr:192.168.39.93 Prefix:24 Hostname:addons-757601 Clientid:01:52:54:00:e0:35:1c}
	I1207 20:04:13.937335   17445 main.go:141] libmachine: (addons-757601) DBG | domain addons-757601 has defined IP address 192.168.39.93 and MAC address 52:54:00:e0:35:1c in network mk-addons-757601
	I1207 20:04:13.937486   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHPort
	I1207 20:04:13.937663   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHKeyPath
	I1207 20:04:13.937803   17445 main.go:141] libmachine: (addons-757601) Calling .GetSSHUsername
	I1207 20:04:13.937900   17445 sshutil.go:53] new ssh client: &{IP:192.168.39.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/addons-757601/id_rsa Username:docker}
	I1207 20:04:15.490712   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.165807423s)
	I1207 20:04:15.490757   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.165126677s)
	I1207 20:04:15.490794   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.490816   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.490831   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.126681324s)
	I1207 20:04:15.490763   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.490866   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.490911   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.561455146s)
	I1207 20:04:15.490942   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.490963   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.490993   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.454048679s)
	I1207 20:04:15.491015   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.490866   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.491030   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.491037   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.491121   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.403844s)
	I1207 20:04:15.491144   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.491155   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.491207   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.491249   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.491260   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.491269   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.491277   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.491395   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.239147893s)
	W1207 20:04:15.491420   17445 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1207 20:04:15.491441   17445 retry.go:31] will retry after 264.50906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1207 20:04:15.491511   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.089077392s)
	I1207 20:04:15.491527   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.491536   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.491628   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.491707   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.491717   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.491726   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.491734   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.493214   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.493228   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.493240   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.493249   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.493302   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.493335   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.493341   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.493345   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.493355   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.493360   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.493364   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.493384   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.493392   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.493401   17445 addons.go:467] Verifying addon ingress=true in "addons-757601"
	I1207 20:04:15.493419   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.493436   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.493446   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.493455   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.493484   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.495103   17445 out.go:177] * Verifying ingress addon...
	I1207 20:04:15.493509   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.493516   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.493536   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.493556   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.493827   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.493830   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.494317   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.494355   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.495139   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.495152   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.495174   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.495164   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.495186   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.495272   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.495176   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.495235   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.495778   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.497710   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.497754   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.497774   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.494409   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.498248   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.498263   17445 addons.go:467] Verifying addon metrics-server=true in "addons-757601"
	I1207 20:04:15.495251   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.494429   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.496012   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.498359   17445 addons.go:467] Verifying addon registry=true in "addons-757601"
	I1207 20:04:15.499888   17445 out.go:177] * Verifying registry addon...
	I1207 20:04:15.498601   17445 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1207 20:04:15.496045   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.502676   17445 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1207 20:04:15.513125   17445 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1207 20:04:15.513145   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:15.523010   17445 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 20:04:15.523027   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:15.554615   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:15.558881   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:15.563061   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:15.563084   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:15.563322   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:15.563357   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:15.563373   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:15.668447   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:15.756922   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 20:04:16.269404   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:16.270248   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:16.361596   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.855551268s)
	I1207 20:04:16.361628   17445 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.42794045s)
	I1207 20:04:16.361650   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:16.361664   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:16.364135   17445 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1207 20:04:16.361973   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:16.362009   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:16.364190   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:16.364207   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:16.364219   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:16.364466   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:16.364503   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:16.366210   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:16.366213   17445 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1207 20:04:16.366220   17445 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-757601"
	I1207 20:04:16.367584   17445 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1207 20:04:16.367602   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1207 20:04:16.369094   17445 out.go:177] * Verifying csi-hostpath-driver addon...
	I1207 20:04:16.371923   17445 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1207 20:04:16.440857   17445 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1207 20:04:16.440890   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1207 20:04:16.492514   17445 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 20:04:16.492533   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:16.541147   17445 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 20:04:16.541167   17445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1207 20:04:16.557290   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:16.612565   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:16.613469   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:16.645229   17445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 20:04:17.070562   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:17.092884   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:17.100079   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:17.706989   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:17.707008   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:17.713330   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:17.735245   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:18.062444   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:18.067293   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:18.067514   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:18.594523   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:18.605148   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:18.605778   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:18.769722   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.012732714s)
	I1207 20:04:18.769780   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:18.769797   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:18.770150   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:18.770171   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:18.770209   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:18.770225   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:18.770238   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:18.770450   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:18.770468   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:19.037577   17445 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.392310411s)
	I1207 20:04:19.037630   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:19.037643   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:19.037942   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:19.038007   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:19.038033   17445 main.go:141] libmachine: Making call to close driver server
	I1207 20:04:19.037970   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:19.038045   17445 main.go:141] libmachine: (addons-757601) Calling .Close
	I1207 20:04:19.038388   17445 main.go:141] libmachine: (addons-757601) DBG | Closing plugin on server side
	I1207 20:04:19.038435   17445 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:04:19.038453   17445 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:04:19.039909   17445 addons.go:467] Verifying addon gcp-auth=true in "addons-757601"
	I1207 20:04:19.041823   17445 out.go:177] * Verifying gcp-auth addon...
	I1207 20:04:19.044630   17445 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1207 20:04:19.061305   17445 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1207 20:04:19.061325   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:19.078389   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:19.123607   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:19.123762   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:19.124185   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:19.564084   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:19.580523   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:19.586590   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:19.637724   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:20.059651   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:20.063222   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:20.066334   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:20.133932   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:20.160386   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:20.560168   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:20.566359   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:20.568907   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:20.627524   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:21.059433   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:21.063875   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:21.067229   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:21.128234   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:21.560407   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:21.564461   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:21.567536   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:21.628279   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:22.061534   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:22.066313   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:22.072509   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:22.127756   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:22.160622   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:22.559331   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:22.563621   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:22.571029   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:22.627699   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:23.059232   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:23.063363   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:23.065984   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:23.128575   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:23.560364   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:23.569998   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:23.598119   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:23.628219   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:24.078568   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:24.085785   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:24.086843   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:24.152924   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:24.165750   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:24.562920   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:24.563656   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:24.565800   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:24.628015   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:25.061130   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:25.067120   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:25.069265   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:25.130871   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:25.568865   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:25.574080   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:25.576214   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:25.628096   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:26.077272   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:26.093774   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:26.097239   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:26.134141   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:26.170720   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:26.559383   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:26.566459   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:26.571167   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:26.631659   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:27.064111   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:27.087302   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:27.089129   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:27.131224   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:27.570549   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:27.608279   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:27.608391   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:27.632251   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:28.065548   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:28.069452   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:28.073015   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:28.140156   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:28.560241   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:28.568359   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:28.568743   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:28.629322   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:28.662826   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:29.069600   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:29.075169   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:29.085120   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:29.145185   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:29.559680   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:29.562861   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:29.565847   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:29.628086   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:30.060567   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:30.066142   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:30.068537   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:30.128567   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:30.562360   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:30.564816   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:30.566392   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:30.628335   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:31.060444   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:31.069086   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:31.070588   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:31.127946   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:31.160245   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:31.559941   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:31.567103   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:31.584847   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:31.647585   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:32.063700   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:32.089965   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:32.090758   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:32.135355   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:32.560102   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:32.568212   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:32.573681   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:32.638984   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:33.076173   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:33.077465   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:33.082701   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:33.127692   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:33.564136   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:33.565134   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:33.569890   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:33.628882   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:33.660162   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:34.059494   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:34.064239   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:34.067322   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:34.128381   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:34.559241   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:34.565703   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:34.565833   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:34.638665   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:35.152252   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:35.155169   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:35.155213   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:35.156089   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:35.562172   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:35.564122   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:35.565918   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:35.628452   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:35.662651   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:36.062626   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:36.064404   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:36.064767   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:36.128188   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:36.561782   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:36.563813   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:36.565463   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:36.627772   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:37.066661   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:37.067125   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:37.069009   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:37.130684   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:37.560760   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:37.564776   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:37.566336   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:37.627328   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:38.059421   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:38.067995   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:38.068418   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:38.128354   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:38.159941   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:38.560687   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:38.566873   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:38.567841   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:38.629353   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:39.061645   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:39.065954   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:39.068354   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:39.131153   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:39.560036   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:39.564425   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:39.566050   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:39.628669   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:40.061522   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:40.064225   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:40.066970   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:40.127592   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:40.162369   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:40.559328   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:40.564722   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:40.564892   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:40.627969   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:41.060068   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:41.063729   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:41.064470   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:41.128394   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:41.560770   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:41.564657   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:41.568163   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:41.627488   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:42.062096   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:42.066672   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:42.066685   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:42.128867   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:42.560540   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:42.563777   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:42.565649   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:42.627558   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:42.659686   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:43.059233   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:43.062786   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:43.064385   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:43.127834   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:43.560058   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:43.564121   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:43.564331   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:43.628227   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:44.064759   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:44.065499   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:44.068077   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:44.127343   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:44.561252   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:44.564404   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:44.567963   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:44.629104   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:45.117107   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:45.119464   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:45.119949   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:45.121131   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:45.144948   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:45.561880   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:45.563553   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:45.564895   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:45.627711   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:46.062157   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:46.067951   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:46.070368   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:46.129403   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:46.559200   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:46.564425   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:46.564501   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:46.628943   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:47.064489   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:47.067811   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:47.070590   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:47.127916   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:47.160720   17445 pod_ready.go:102] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:04:47.559452   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:47.569057   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:47.571679   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:47.627334   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:47.661612   17445 pod_ready.go:92] pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace has status "Ready":"True"
	I1207 20:04:47.661632   17445 pod_ready.go:81] duration metric: took 40.33026147s waiting for pod "coredns-5dd5756b68-bn9s7" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:47.661641   17445 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dvxw2" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:47.664768   17445 pod_ready.go:97] error getting pod "coredns-5dd5756b68-dvxw2" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-dvxw2" not found
	I1207 20:04:47.664785   17445 pod_ready.go:81] duration metric: took 3.13878ms waiting for pod "coredns-5dd5756b68-dvxw2" in "kube-system" namespace to be "Ready" ...
	E1207 20:04:47.664795   17445 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-dvxw2" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-dvxw2" not found
	I1207 20:04:47.664800   17445 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-757601" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:47.670759   17445 pod_ready.go:92] pod "etcd-addons-757601" in "kube-system" namespace has status "Ready":"True"
	I1207 20:04:47.670773   17445 pod_ready.go:81] duration metric: took 5.967915ms waiting for pod "etcd-addons-757601" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:47.670780   17445 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-757601" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:47.675306   17445 pod_ready.go:92] pod "kube-apiserver-addons-757601" in "kube-system" namespace has status "Ready":"True"
	I1207 20:04:47.675320   17445 pod_ready.go:81] duration metric: took 4.534169ms waiting for pod "kube-apiserver-addons-757601" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:47.675327   17445 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-757601" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:47.684076   17445 pod_ready.go:92] pod "kube-controller-manager-addons-757601" in "kube-system" namespace has status "Ready":"True"
	I1207 20:04:47.684090   17445 pod_ready.go:81] duration metric: took 8.756686ms waiting for pod "kube-controller-manager-addons-757601" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:47.684097   17445 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pndw8" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:47.858792   17445 pod_ready.go:92] pod "kube-proxy-pndw8" in "kube-system" namespace has status "Ready":"True"
	I1207 20:04:47.858816   17445 pod_ready.go:81] duration metric: took 174.711939ms waiting for pod "kube-proxy-pndw8" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:47.858867   17445 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-757601" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:48.071124   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:48.071201   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:48.072617   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:48.130129   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:48.259647   17445 pod_ready.go:92] pod "kube-scheduler-addons-757601" in "kube-system" namespace has status "Ready":"True"
	I1207 20:04:48.259669   17445 pod_ready.go:81] duration metric: took 400.790727ms waiting for pod "kube-scheduler-addons-757601" in "kube-system" namespace to be "Ready" ...
	I1207 20:04:48.259678   17445 pod_ready.go:38] duration metric: took 40.987056686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
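	(The kapi.go:96 and pod_ready.go lines above are a fixed-interval poll: each label selector is re-checked roughly every 500ms until the pod reports Ready or the per-selector timeout expires. A minimal Go sketch of that pattern, purely illustrative — waitFor and the toy condition below are hypothetical and not minikube's actual implementation:

		package main

		import (
			"errors"
			"fmt"
			"time"
		)

		// waitFor is a hypothetical stand-in for the polling seen in the log:
		// call check() on a fixed interval until it succeeds or the timeout expires.
		func waitFor(timeout, interval time.Duration, check func() (bool, error)) error {
			deadline := time.Now().Add(timeout)
			for {
				ok, err := check()
				if err != nil {
					return err
				}
				if ok {
					return nil
				}
				if time.Now().After(deadline) {
					return errors.New("timed out waiting for condition")
				}
				time.Sleep(interval)
			}
		}

		func main() {
			start := time.Now()
			// Toy condition: report ready after two seconds, roughly how the log's
			// pods flip from Pending to Ready after repeated polls.
			err := waitFor(6*time.Minute, 500*time.Millisecond, func() (bool, error) {
				return time.Since(start) > 2*time.Second, nil
			})
			fmt.Println("wait result:", err, "after", time.Since(start).Round(time.Millisecond))
		}
	)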
	I1207 20:04:48.259695   17445 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:04:48.259749   17445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:04:48.300496   17445 api_server.go:72] duration metric: took 41.384933592s to wait for apiserver process to appear ...
	I1207 20:04:48.300519   17445 api_server.go:88] waiting for apiserver healthz status ...
	I1207 20:04:48.300534   17445 api_server.go:253] Checking apiserver healthz at https://192.168.39.93:8443/healthz ...
	I1207 20:04:48.305623   17445 api_server.go:279] https://192.168.39.93:8443/healthz returned 200:
	ok
	I1207 20:04:48.306869   17445 api_server.go:141] control plane version: v1.28.4
	I1207 20:04:48.306905   17445 api_server.go:131] duration metric: took 6.381057ms to wait for apiserver health ...
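	(The healthz probe recorded above is an HTTPS GET against the apiserver endpoint; a healthy apiserver answers 200 with the body "ok", exactly as logged. A minimal Go sketch of an equivalent check — the address 192.168.39.93:8443 is copied from the log, and skipping TLS verification is an illustrative shortcut, not how minikube itself authenticates:

		package main

		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
		)

		func main() {
			// The test cluster serves /healthz over HTTPS with a cluster-local CA,
			// so this sketch skips certificate verification for brevity.
			client := &http.Client{Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			}}
			resp, err := client.Get("https://192.168.39.93:8443/healthz")
			if err != nil {
				fmt.Println("healthz check failed:", err)
				return
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			fmt.Printf("%d %s\n", resp.StatusCode, body)
		}
	)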
	I1207 20:04:48.306912   17445 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 20:04:48.529325   17445 system_pods.go:59] 18 kube-system pods found
	I1207 20:04:48.529353   17445 system_pods.go:61] "coredns-5dd5756b68-bn9s7" [361178d4-583f-46ed-aaaa-54331bde2a34] Running
	I1207 20:04:48.529361   17445 system_pods.go:61] "csi-hostpath-attacher-0" [8fb9c5b0-5c25-4bf0-a50a-dd70da680b6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 20:04:48.529366   17445 system_pods.go:61] "csi-hostpath-resizer-0" [b68a397b-b39d-409d-acfd-381617659ad1] Running
	I1207 20:04:48.529375   17445 system_pods.go:61] "csi-hostpathplugin-xgf2f" [d4ac0b62-5927-42fa-a03c-3fe363623547] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 20:04:48.529380   17445 system_pods.go:61] "etcd-addons-757601" [76822484-6018-4992-9692-ca25514ba920] Running
	I1207 20:04:48.529385   17445 system_pods.go:61] "kube-apiserver-addons-757601" [e70ae217-9c8b-454a-ab32-4f939c55c27d] Running
	I1207 20:04:48.529389   17445 system_pods.go:61] "kube-controller-manager-addons-757601" [d6b73667-d731-48a8-aa2b-24027d24d3b9] Running
	I1207 20:04:48.529398   17445 system_pods.go:61] "kube-ingress-dns-minikube" [fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3] Running
	I1207 20:04:48.529466   17445 system_pods.go:61] "kube-proxy-pndw8" [11b0dbc0-9367-41dc-baab-b4f9e89a95c3] Running
	I1207 20:04:48.529487   17445 system_pods.go:61] "kube-scheduler-addons-757601" [67854c9e-2f2a-42be-a56e-9ff1a498ed35] Running
	I1207 20:04:48.529500   17445 system_pods.go:61] "metrics-server-7c66d45ddc-8m6ck" [d9124562-9981-4a1d-9b4c-3e26b6ebe070] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 20:04:48.529513   17445 system_pods.go:61] "nvidia-device-plugin-daemonset-5m6r5" [d2c991f1-e7f9-47bd-b82e-f542c0dd79cd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 20:04:48.529528   17445 system_pods.go:61] "registry-proxy-k2cft" [4fb405a9-9156-489a-82b8-dc52261e2365] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 20:04:48.529541   17445 system_pods.go:61] "registry-s82w5" [e0e2ee17-ea9d-4ffa-b6db-8b3ed128a0a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 20:04:48.529550   17445 system_pods.go:61] "snapshot-controller-58dbcc7b99-ht48m" [1bb137b3-e403-4c25-b1a5-37fe4e7aab93] Running
	I1207 20:04:48.529559   17445 system_pods.go:61] "snapshot-controller-58dbcc7b99-nnrth" [33bb14e4-82e4-42b0-8176-748df209f719] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 20:04:48.529566   17445 system_pods.go:61] "storage-provisioner" [693122e7-b001-4675-8de7-403dee50ca9f] Running
	I1207 20:04:48.529571   17445 system_pods.go:61] "tiller-deploy-7b677967b9-9q4z5" [ab057003-3d2f-4282-a60e-3ed01033c5e4] Running
	I1207 20:04:48.529580   17445 system_pods.go:74] duration metric: took 222.662404ms to wait for pod list to return data ...
	I1207 20:04:48.529587   17445 default_sa.go:34] waiting for default service account to be created ...
	I1207 20:04:48.560889   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:48.564775   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:48.573807   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:48.627901   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:48.657463   17445 default_sa.go:45] found service account: "default"
	I1207 20:04:48.657486   17445 default_sa.go:55] duration metric: took 127.89168ms for default service account to be created ...
	I1207 20:04:48.657494   17445 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 20:04:48.865658   17445 system_pods.go:86] 18 kube-system pods found
	I1207 20:04:48.865684   17445 system_pods.go:89] "coredns-5dd5756b68-bn9s7" [361178d4-583f-46ed-aaaa-54331bde2a34] Running
	I1207 20:04:48.865693   17445 system_pods.go:89] "csi-hostpath-attacher-0" [8fb9c5b0-5c25-4bf0-a50a-dd70da680b6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 20:04:48.865698   17445 system_pods.go:89] "csi-hostpath-resizer-0" [b68a397b-b39d-409d-acfd-381617659ad1] Running
	I1207 20:04:48.865706   17445 system_pods.go:89] "csi-hostpathplugin-xgf2f" [d4ac0b62-5927-42fa-a03c-3fe363623547] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 20:04:48.865710   17445 system_pods.go:89] "etcd-addons-757601" [76822484-6018-4992-9692-ca25514ba920] Running
	I1207 20:04:48.865715   17445 system_pods.go:89] "kube-apiserver-addons-757601" [e70ae217-9c8b-454a-ab32-4f939c55c27d] Running
	I1207 20:04:48.865720   17445 system_pods.go:89] "kube-controller-manager-addons-757601" [d6b73667-d731-48a8-aa2b-24027d24d3b9] Running
	I1207 20:04:48.865725   17445 system_pods.go:89] "kube-ingress-dns-minikube" [fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3] Running
	I1207 20:04:48.865731   17445 system_pods.go:89] "kube-proxy-pndw8" [11b0dbc0-9367-41dc-baab-b4f9e89a95c3] Running
	I1207 20:04:48.865735   17445 system_pods.go:89] "kube-scheduler-addons-757601" [67854c9e-2f2a-42be-a56e-9ff1a498ed35] Running
	I1207 20:04:48.865743   17445 system_pods.go:89] "metrics-server-7c66d45ddc-8m6ck" [d9124562-9981-4a1d-9b4c-3e26b6ebe070] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 20:04:48.865754   17445 system_pods.go:89] "nvidia-device-plugin-daemonset-5m6r5" [d2c991f1-e7f9-47bd-b82e-f542c0dd79cd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 20:04:48.865762   17445 system_pods.go:89] "registry-proxy-k2cft" [4fb405a9-9156-489a-82b8-dc52261e2365] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 20:04:48.865768   17445 system_pods.go:89] "registry-s82w5" [e0e2ee17-ea9d-4ffa-b6db-8b3ed128a0a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 20:04:48.865775   17445 system_pods.go:89] "snapshot-controller-58dbcc7b99-ht48m" [1bb137b3-e403-4c25-b1a5-37fe4e7aab93] Running
	I1207 20:04:48.865781   17445 system_pods.go:89] "snapshot-controller-58dbcc7b99-nnrth" [33bb14e4-82e4-42b0-8176-748df209f719] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 20:04:48.865788   17445 system_pods.go:89] "storage-provisioner" [693122e7-b001-4675-8de7-403dee50ca9f] Running
	I1207 20:04:48.865792   17445 system_pods.go:89] "tiller-deploy-7b677967b9-9q4z5" [ab057003-3d2f-4282-a60e-3ed01033c5e4] Running
	I1207 20:04:48.865801   17445 system_pods.go:126] duration metric: took 208.301832ms to wait for k8s-apps to be running ...
	I1207 20:04:48.865810   17445 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 20:04:48.865852   17445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:04:48.894941   17445 system_svc.go:56] duration metric: took 29.123547ms WaitForService to wait for kubelet.
	I1207 20:04:48.894965   17445 kubeadm.go:581] duration metric: took 41.979407201s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 20:04:48.894984   17445 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:04:49.060028   17445 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:04:49.060068   17445 node_conditions.go:123] node cpu capacity is 2
	I1207 20:04:49.060084   17445 node_conditions.go:105] duration metric: took 165.09512ms to run NodePressure ...
	I1207 20:04:49.060097   17445 start.go:228] waiting for startup goroutines ...
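	(The two shell probes the log runs via ssh_runner — pgrep for the apiserver process and systemctl is-active for kubelet — could be reproduced directly; a minimal local sketch, with the command lines copied verbatim from the log and the runCheck helper being a hypothetical convenience, not minikube code:

		package main

		import (
			"fmt"
			"os/exec"
		)

		// runCheck runs a command and reports whether it exited with status zero,
		// mirroring how the test treats these probes as pass/fail.
		func runCheck(name string, args ...string) bool {
			return exec.Command(name, args...).Run() == nil
		}

		func main() {
			// Both command lines are taken from the ssh_runner entries above.
			apiserverUp := runCheck("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
			kubeletUp := runCheck("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
			fmt.Println("kube-apiserver process found:", apiserverUp)
			fmt.Println("kubelet service active:", kubeletUp)
		}
	)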
	I1207 20:04:49.063171   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:49.063750   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:49.065945   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:49.128395   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:49.562716   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:49.564994   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:49.566069   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:49.627737   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:50.060295   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:50.066889   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:50.067668   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:50.128009   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:50.560994   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:50.574147   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:50.575697   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:50.636128   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:51.060307   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:51.065370   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:51.071657   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:51.129653   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:51.610645   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:51.610805   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:51.612856   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:51.639802   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:52.060024   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:52.063900   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:52.067917   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:52.127098   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:52.565576   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:52.575555   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:52.576567   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:52.627000   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:53.059430   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:53.063013   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:53.063913   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:53.127419   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:53.560048   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:53.571592   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:53.572122   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:53.637085   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:54.059573   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:54.065975   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:54.066362   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:54.128399   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:54.560819   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:54.563035   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:54.565293   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:54.627595   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:55.060005   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:55.067888   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:55.068132   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:55.127238   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:55.560175   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:55.564519   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:55.571035   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:55.627763   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:56.059867   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:56.066279   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:56.066645   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:56.127548   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:56.560394   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:56.563582   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:56.564075   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:56.628527   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:57.062896   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:57.066995   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:57.067382   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:57.127142   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:57.561050   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:57.565562   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:57.566353   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:57.628247   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:58.334068   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:58.336068   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:58.337082   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:58.337988   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:58.561345   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:58.565206   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:58.569071   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:58.627711   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:59.062263   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:59.078118   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:59.078709   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:59.142737   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:59.560126   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:04:59.562884   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:04:59.564514   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:04:59.627416   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:00.060220   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:00.065678   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:00.066047   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:00.127695   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:00.559324   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:00.564370   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:00.565090   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:00.629022   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:01.070501   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:01.072492   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:01.081202   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:01.127584   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:01.559658   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:01.563979   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:01.564407   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:01.628071   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:02.061800   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:02.070933   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:02.073748   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:02.128614   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:02.559849   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:02.570258   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:02.570321   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:02.627536   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:03.061052   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:03.065447   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:03.070456   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:03.127938   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:03.560095   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:03.566822   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:03.566907   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:03.628127   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:04.060083   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:04.067928   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:04.068375   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:04.131214   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:04.820702   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:04.825316   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:04.826076   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:04.828422   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:05.059802   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:05.065504   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:05.067902   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:05.128407   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:05.560767   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:05.563811   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:05.565004   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:05.627768   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:06.061023   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:06.064329   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:06.066674   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:06.127738   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:06.559578   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:06.565516   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:06.566616   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:06.627371   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:07.059657   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:07.064413   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:07.065540   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:07.127499   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:07.559383   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:07.563947   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:07.564155   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:07.628739   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:08.065298   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:08.074693   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:08.080570   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:08.128023   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:08.560915   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:08.565487   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:08.568057   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:08.627989   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:09.060138   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:09.067998   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:09.068369   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:09.141659   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:09.560004   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:09.566993   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:05:09.567286   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:09.628563   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:10.062834   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:10.064111   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:10.065183   17445 kapi.go:107] duration metric: took 54.56250493s to wait for kubernetes.io/minikube-addons=registry ...
	I1207 20:05:10.128065   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:10.563040   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:10.563311   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:10.628521   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:11.060085   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:11.064064   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:11.128071   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:11.560231   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:11.562517   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:11.629789   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:12.060016   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:12.066821   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:12.127560   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:12.559471   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:12.563473   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:12.627630   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:13.064048   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:13.064280   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:13.128211   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:13.560202   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:13.564263   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:13.628374   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:14.061411   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:14.066397   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:14.127921   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:14.560883   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:14.564778   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:14.627680   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:15.059681   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:15.063250   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:15.129273   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:15.560894   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:15.563891   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:15.631526   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:16.464620   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:16.465417   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:16.465860   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:16.559822   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:16.562871   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:16.627528   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:17.060728   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:17.065777   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:17.127488   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:17.560980   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:17.564402   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:17.627274   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:18.067121   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:18.067367   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:18.127882   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:18.559791   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:18.563613   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:18.628135   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:19.316639   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:19.320994   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:19.324353   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:19.559520   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:19.566891   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:19.627393   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:20.067811   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:20.067882   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:20.129381   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:20.563875   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:20.565303   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:20.630673   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:21.059362   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:21.064398   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:21.128859   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:21.560897   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:21.563691   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:21.633777   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:22.060116   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:22.064552   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:22.128580   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:22.559283   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:22.566130   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:22.628068   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:23.060227   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:23.063534   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:23.128129   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:23.559600   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:23.563735   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:23.627219   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:24.070759   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:24.073070   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:24.129354   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:24.560447   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:24.564969   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:24.630736   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:25.060255   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:25.067831   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:25.139083   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:25.559328   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:25.572585   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:25.627695   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:26.059661   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:26.063656   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:26.133002   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:26.559174   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:26.565035   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:26.627545   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:27.059039   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:27.063354   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:27.127630   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:27.559873   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:27.563238   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:27.628575   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:28.059154   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:28.063213   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:28.127817   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:28.569104   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:28.574992   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:28.628432   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:29.061678   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:29.064476   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:29.127276   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:29.559998   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:29.563144   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:29.628196   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:30.066852   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:30.067058   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:30.127808   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:30.559473   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:30.563531   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:30.627065   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:31.197795   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:31.201008   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:31.201712   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:31.559788   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:31.565213   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:31.631613   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:32.059909   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:32.063249   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:32.128235   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:32.559742   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:32.565703   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:32.628084   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:33.060004   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:33.063854   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:33.127639   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:33.562986   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:33.571464   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:33.628657   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:34.067658   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:34.068386   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:34.128136   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:34.559097   17445 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:05:34.563463   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:34.636151   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:35.061130   17445 kapi.go:107] duration metric: took 1m19.562527476s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1207 20:05:35.063410   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:35.127079   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:35.564118   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:35.633673   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:36.064717   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:36.128805   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:36.623505   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:36.635027   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:37.064312   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:37.133115   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:37.566322   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:37.628693   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:38.066478   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:38.128748   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:38.564019   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:38.628659   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:39.064478   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:39.129096   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:39.564681   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:39.628008   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:40.063606   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:40.127897   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:40.564293   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:40.628638   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:41.069197   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:41.127594   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:41.564829   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:05:41.628650   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:42.063231   17445 kapi.go:107] duration metric: took 1m25.691306757s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1207 20:05:42.128357   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:42.627618   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:43.131856   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:43.861808   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:44.130124   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:44.627757   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:45.128085   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:45.627916   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:46.128478   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:46.627727   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:47.127995   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:47.628192   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:48.128758   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:48.628256   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:49.128962   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:49.629465   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:50.127955   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:50.627861   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:51.128364   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:51.628301   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:52.128134   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:52.629341   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:53.129150   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:53.628116   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:54.128790   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:54.627239   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:55.128480   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:55.628717   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:56.129337   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:56.630007   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:57.132856   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:57.627474   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:58.128473   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:58.628856   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:59.127787   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:05:59.627997   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:00.127896   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:00.629664   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:01.127556   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:01.628945   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:02.129179   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:02.628355   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:03.128366   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:03.627630   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:04.127616   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:04.629149   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:05.128331   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:05.628422   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:06.128560   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:06.627451   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:07.127489   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:07.627374   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:08.128555   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:08.627949   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:09.127989   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:09.627942   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:10.127610   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:10.627754   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:11.128026   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:11.628592   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:12.128715   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:12.628387   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:13.128616   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:13.627675   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:14.127567   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:14.628189   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:15.128108   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:15.628038   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:16.128781   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:16.627938   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:17.128219   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:17.628356   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:18.128221   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:18.628878   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:19.127690   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:19.627601   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:20.128311   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:20.628038   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:21.128553   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:21.627531   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:22.128686   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:22.627581   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:23.128204   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:23.627899   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:24.127679   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:24.628749   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:25.127771   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:25.628094   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:26.128319   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:26.629094   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:27.128192   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:27.629016   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:28.128658   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:28.628764   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:29.127774   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:29.627906   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:30.128206   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:30.628390   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:31.128580   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:31.627476   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:32.128852   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:32.627990   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:33.127841   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:33.627814   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:34.128279   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:34.628192   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:35.139239   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:35.629262   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:36.127886   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:36.627470   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:37.131414   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:37.628305   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:38.128646   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:38.628381   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:39.128535   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:39.628404   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:40.127255   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:40.628329   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:41.128502   17445 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:06:41.627821   17445 kapi.go:107] duration metric: took 2m22.583181946s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1207 20:06:41.630126   17445 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-757601 cluster.
	I1207 20:06:41.631914   17445 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1207 20:06:41.633556   17445 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1207 20:06:41.635168   17445 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, default-storageclass, storage-provisioner, cloud-spanner, inspektor-gadget, metrics-server, helm-tiller, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1207 20:06:41.636668   17445 addons.go:502] enable addons completed in 2m34.836485957s: enabled=[nvidia-device-plugin ingress-dns default-storageclass storage-provisioner cloud-spanner inspektor-gadget metrics-server helm-tiller storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1207 20:06:41.636704   17445 start.go:233] waiting for cluster config update ...
	I1207 20:06:41.636725   17445 start.go:242] writing updated cluster config ...
	I1207 20:06:41.636966   17445 ssh_runner.go:195] Run: rm -f paused
	I1207 20:06:41.687331   17445 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 20:06:41.689786   17445 out.go:177] * Done! kubectl is now configured to use "addons-757601" cluster and "default" namespace by default
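	The gcp-auth messages above describe the addon's opt-out mechanism: a pod can skip credential mounting if it carries the `gcp-auth-skip-secret` label. A minimal sketch of what that looks like in a pod manifest (the label key comes from the log message above; the pod name, image, and label value are illustrative assumptions and are not part of this test run):

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-creds      # illustrative name, not used by this test
	      labels:
	        gcp-auth-skip-secret: "true"  # per the log above, this label tells the gcp-auth webhook not to mount credentials
	    spec:
	      containers:
	      - name: app
	        image: nginx                  # any image; shown only to make the sketch self-contained

	Pods that already exist are not retrofitted automatically; per the same messages, they would need to be recreated or the addon re-enabled with --refresh.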
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 20:03:22 UTC, ends at Thu 2023-12-07 20:09:46 UTC. --
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.352064804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701979786352048267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543771,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=4cd5220b-c693-476f-96db-037929e80dd5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.352536654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ede5867b-0271-4700-a24d-27e0a3728d44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.352672487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ede5867b-0271-4700-a24d-27e0a3728d44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.352988136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c96bb99a8f1e10c0c7a5db3d2ab4a0a577f54f48fd305d86af9b0f6427a353ea,PodSandboxId:0ce9d06f12b1633539d6e969350c342cb89647b9665ccbad3a6d3db21e44d4c0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701979779695032939,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-7n62d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52709a46-015b-4017-9d33-7463e710190e,},Annotations:map[string]string{io.kubernetes.container.hash: bc9da217,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7c301c3f3c9cfadc9c98983ec36f0a9a6e27ebcc295b73893a3c359343db26,PodSandboxId:9a9574d1918c1649a12841b0ee6318b85bac7467922dae4be086b1a46676232c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701979654792073005,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-wtvt2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59547b02-2a38-42bc-8e3a-336953be35f5,},Annotations:map[string]string{io.kubernetes.container.hash: a0c3e482,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157ffc9d6b349b59895d7cc6e7bb862caa365f449c03d74ba6a4a7b8c3fb98e9,PodSandboxId:58ca04f6c693039fc09a383fc61f649f4d9c05ea72e663211563708981bffe6f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701979636840357748,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57e0b0f5-d87c-4d24-bab1-4a5e62295828,},Annotations:map[string]string{io.kubernetes.container.hash: b4e7291e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54aa2d27d87910e64398118e5a009b9f3614219707fbdcbd7b93f148649c2458,PodSandboxId:256dfd11be4e4096a5898503cab089f82d7ad03e257899d4b9ea7628c67267ec,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701979600096804655,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-km9kn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 65eebacc-b9fc-46d7-91e9-ef907ef2b501,},Annotations:map[string]string{io.kubernetes.container.hash: f63f818d,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86d0b045e9fa11a90ea40346ef9673344190cc03ae3f60c2e656f4406999dc7d,PodSandboxId:6a159ee43a143ae7f168415a86ad3ab95373a036947cc29855f0a4d9498c2002,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701979506735125845,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zvw5f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63f5f0d6-6553-4c2b-b11f-bcb76fc64dc3,},Annotations:map[string]string{io.kubernetes.container.hash: 396200b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e3f45560c6bd53d85fad9d34def223a69fbb74b5652d750db7ed62f2e2dd54,PodSandboxId:6e2ca1c72e80469aca7e22ca8699685601b92abf5a48a5d6b748d7c1eeb00619,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701979499038984874,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pdnmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78416b6a-955b-4bb3-b1af-4e2860beadc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2765d046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034388e8b5f14b69a6e8c53313e7ecbcb879964cc1c13b60501b0e0b42a70783,PodSandboxId:e71cbf1a76e45b0b09a1a79aeea4a463467d0d5f9104d2d05bc14fac0a8ed43e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701979493548282484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693122e7-b001-4675-8de7-403dee50ca9f,},Annotations:map[string]string{io.kubernetes.container.hash: c8ef7cff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b1980789bae782a4306a588eb1581ffa0318c1cd2dbe3332f2a740fee9d94d,PodSandboxId:0d4ddfe182046303e6d7f357982029a5bc828b1de7a353fb8917b0b8d8d2a353,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,St
ate:CONTAINER_RUNNING,CreatedAt:1701979462350296249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pndw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11b0dbc0-9367-41dc-baab-b4f9e89a95c3,},Annotations:map[string]string{io.kubernetes.container.hash: 55afb5be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c46b25530f7209acf4d120d04cac75eb769c1a9923f06bd369675467d61f85,PodSandboxId:e71cbf1a76e45b0b09a1a79aeea4a463467d0d5f9104d2d05bc14fac0a8ed43e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER
_EXITED,CreatedAt:1701979462079400575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693122e7-b001-4675-8de7-403dee50ca9f,},Annotations:map[string]string{io.kubernetes.container.hash: c8ef7cff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35967cff7a2f7d06b752021954430bc3168ef08d50e1bc9552acd3b16c3c3df,PodSandboxId:5f9dd3c4540a72eb3c9e971238eea7109d8037baff04505d127b06b8cb6dfe59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:170
1979449944206681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bn9s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 361178d4-583f-46ed-aaaa-54331bde2a34,},Annotations:map[string]string{io.kubernetes.container.hash: 438dd847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd3dca5dd0c88eee040471b0bad32a418c6aadffc66ff987313a63e8f96cb0e,PodSandboxId:72d456f61c61c519ae25cec39d9e722d8cd8fc11e6ac05483c16ab79cfe0933a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35
c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701979426877997267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d509039ce8e28f4b9b8b64b95c174039,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3ba3c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795f767b501376594987feeac0dec16a7ecf6685d263752ef8956466aa878dbe,PodSandboxId:2d21cb7b3da5aceefb8d6e7de7f499dec2e1d716306b7e247607ff0a240ca6be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},
ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701979426639148615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190324c177c8f854cce36b306e4f49ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7100e24fb2f5d5ea15fb00b7484eaa91ee1efe71e76eb47854294b9d41f57097,PodSandboxId:abe03b71640d3b573eaa12cf20213da237d119f94b1c03d05b4263f3073fc157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:regist
ry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701979426712960237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30cf99096b1e61bc85cf3c849b120c74,},Annotations:map[string]string{io.kubernetes.container.hash: 6a01911a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab7d910920e28c20bb35e32578cada816f528f93eb61578e1d6f5acae415413,PodSandboxId:485b0379193be74f1af1e64645d0d8d751cbf66960cdb62483652cbe9f0429be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s
.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701979426447883131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ffd6321c2e96feb8d923621d412eec,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ede5867b-0271-4700-a24d-27e0a3728d44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.393981154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f32bb5be-f73e-4756-8421-e21cf3361206 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.394063469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f32bb5be-f73e-4756-8421-e21cf3361206 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.395180400Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=147a65b1-a448-4119-924d-97c1532382f5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.396554060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701979786396537015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543771,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=147a65b1-a448-4119-924d-97c1532382f5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.397331322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d011819a-27ca-44d1-9c3d-21fa7060d1b2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.397404643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d011819a-27ca-44d1-9c3d-21fa7060d1b2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.397761108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c96bb99a8f1e10c0c7a5db3d2ab4a0a577f54f48fd305d86af9b0f6427a353ea,PodSandboxId:0ce9d06f12b1633539d6e969350c342cb89647b9665ccbad3a6d3db21e44d4c0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701979779695032939,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-7n62d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52709a46-015b-4017-9d33-7463e710190e,},Annotations:map[string]string{io.kubernetes.container.hash: bc9da217,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7c301c3f3c9cfadc9c98983ec36f0a9a6e27ebcc295b73893a3c359343db26,PodSandboxId:9a9574d1918c1649a12841b0ee6318b85bac7467922dae4be086b1a46676232c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701979654792073005,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-wtvt2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59547b02-2a38-42bc-8e3a-336953be35f5,},An
notations:map[string]string{io.kubernetes.container.hash: a0c3e482,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157ffc9d6b349b59895d7cc6e7bb862caa365f449c03d74ba6a4a7b8c3fb98e9,PodSandboxId:58ca04f6c693039fc09a383fc61f649f4d9c05ea72e663211563708981bffe6f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701979636840357748,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 57e0b0f5-d87c-4d24-bab1-4a5e62295828,},Annotations:map[string]string{io.kubernetes.container.hash: b4e7291e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54aa2d27d87910e64398118e5a009b9f3614219707fbdcbd7b93f148649c2458,PodSandboxId:256dfd11be4e4096a5898503cab089f82d7ad03e257899d4b9ea7628c67267ec,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701979600096804655,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-km9kn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 65eebacc-b9fc-46d7-91e9-ef907ef2b501,},Annotations:map[string]string{io.kubernetes.container.hash: f63f818d,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86d0b045e9fa11a90ea40346ef9673344190cc03ae3f60c2e656f4406999dc7d,PodSandboxId:6a159ee43a143ae7f168415a86ad3ab95373a036947cc29855f0a4d9498c2002,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17019795
06735125845,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zvw5f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63f5f0d6-6553-4c2b-b11f-bcb76fc64dc3,},Annotations:map[string]string{io.kubernetes.container.hash: 396200b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e3f45560c6bd53d85fad9d34def223a69fbb74b5652d750db7ed62f2e2dd54,PodSandboxId:6e2ca1c72e80469aca7e22ca8699685601b92abf5a48a5d6b748d7c1eeb00619,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701979499038984874,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pdnmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78416b6a-955b-4bb3-b1af-4e2860beadc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2765d046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034388e8b5f14b69a6e8c53313e7ecbcb879964cc1c13b60501b0e0b42a70783,PodSandboxId:e71cbf1a76e45b0b09a1a79aeea4a463467d0d5f9104d2d05bc14fac0a8ed43e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701979493548282484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693122e7-b001-4675-8de7-403dee50ca9f,},Annotations:map[string]string{io.kubernetes.container.hash: c8ef7cff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b1980789bae782a4306a588eb1581ffa0318c1cd2dbe3332f2a740fee9d94d,PodSandboxId:0d4ddfe182046303e6d7f357982029a5bc828b1de7a353fb8917b0b8d8d2a353,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,St
ate:CONTAINER_RUNNING,CreatedAt:1701979462350296249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pndw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11b0dbc0-9367-41dc-baab-b4f9e89a95c3,},Annotations:map[string]string{io.kubernetes.container.hash: 55afb5be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c46b25530f7209acf4d120d04cac75eb769c1a9923f06bd369675467d61f85,PodSandboxId:e71cbf1a76e45b0b09a1a79aeea4a463467d0d5f9104d2d05bc14fac0a8ed43e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER
_EXITED,CreatedAt:1701979462079400575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693122e7-b001-4675-8de7-403dee50ca9f,},Annotations:map[string]string{io.kubernetes.container.hash: c8ef7cff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35967cff7a2f7d06b752021954430bc3168ef08d50e1bc9552acd3b16c3c3df,PodSandboxId:5f9dd3c4540a72eb3c9e971238eea7109d8037baff04505d127b06b8cb6dfe59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:170
1979449944206681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bn9s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 361178d4-583f-46ed-aaaa-54331bde2a34,},Annotations:map[string]string{io.kubernetes.container.hash: 438dd847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd3dca5dd0c88eee040471b0bad32a418c6aadffc66ff987313a63e8f96cb0e,PodSandboxId:72d456f61c61c519ae25cec39d9e722d8cd8fc11e6ac05483c16ab79cfe0933a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35
c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701979426877997267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d509039ce8e28f4b9b8b64b95c174039,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3ba3c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795f767b501376594987feeac0dec16a7ecf6685d263752ef8956466aa878dbe,PodSandboxId:2d21cb7b3da5aceefb8d6e7de7f499dec2e1d716306b7e247607ff0a240ca6be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},
ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701979426639148615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190324c177c8f854cce36b306e4f49ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7100e24fb2f5d5ea15fb00b7484eaa91ee1efe71e76eb47854294b9d41f57097,PodSandboxId:abe03b71640d3b573eaa12cf20213da237d119f94b1c03d05b4263f3073fc157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:regist
ry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701979426712960237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30cf99096b1e61bc85cf3c849b120c74,},Annotations:map[string]string{io.kubernetes.container.hash: 6a01911a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab7d910920e28c20bb35e32578cada816f528f93eb61578e1d6f5acae415413,PodSandboxId:485b0379193be74f1af1e64645d0d8d751cbf66960cdb62483652cbe9f0429be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s
.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701979426447883131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ffd6321c2e96feb8d923621d412eec,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d011819a-27ca-44d1-9c3d-21fa7060d1b2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.432365781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dd1e3cad-f434-47e5-b4bf-6eb0de1df8df name=/runtime.v1.RuntimeService/Version
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.432446220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dd1e3cad-f434-47e5-b4bf-6eb0de1df8df name=/runtime.v1.RuntimeService/Version
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.434447437Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3a91758d-e9a5-4143-828f-734d99631804 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.436094945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701979786436030617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543771,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=3a91758d-e9a5-4143-828f-734d99631804 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.437339138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f7ef9854-9e6a-4857-8ffb-c93b634eee1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.437412164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f7ef9854-9e6a-4857-8ffb-c93b634eee1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.437804645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c96bb99a8f1e10c0c7a5db3d2ab4a0a577f54f48fd305d86af9b0f6427a353ea,PodSandboxId:0ce9d06f12b1633539d6e969350c342cb89647b9665ccbad3a6d3db21e44d4c0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701979779695032939,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-7n62d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52709a46-015b-4017-9d33-7463e710190e,},Annotations:map[string]string{io.kubernetes.container.hash: bc9da217,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7c301c3f3c9cfadc9c98983ec36f0a9a6e27ebcc295b73893a3c359343db26,PodSandboxId:9a9574d1918c1649a12841b0ee6318b85bac7467922dae4be086b1a46676232c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701979654792073005,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-wtvt2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59547b02-2a38-42bc-8e3a-336953be35f5,},An
notations:map[string]string{io.kubernetes.container.hash: a0c3e482,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157ffc9d6b349b59895d7cc6e7bb862caa365f449c03d74ba6a4a7b8c3fb98e9,PodSandboxId:58ca04f6c693039fc09a383fc61f649f4d9c05ea72e663211563708981bffe6f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701979636840357748,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 57e0b0f5-d87c-4d24-bab1-4a5e62295828,},Annotations:map[string]string{io.kubernetes.container.hash: b4e7291e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54aa2d27d87910e64398118e5a009b9f3614219707fbdcbd7b93f148649c2458,PodSandboxId:256dfd11be4e4096a5898503cab089f82d7ad03e257899d4b9ea7628c67267ec,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701979600096804655,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-km9kn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 65eebacc-b9fc-46d7-91e9-ef907ef2b501,},Annotations:map[string]string{io.kubernetes.container.hash: f63f818d,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86d0b045e9fa11a90ea40346ef9673344190cc03ae3f60c2e656f4406999dc7d,PodSandboxId:6a159ee43a143ae7f168415a86ad3ab95373a036947cc29855f0a4d9498c2002,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17019795
06735125845,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zvw5f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63f5f0d6-6553-4c2b-b11f-bcb76fc64dc3,},Annotations:map[string]string{io.kubernetes.container.hash: 396200b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e3f45560c6bd53d85fad9d34def223a69fbb74b5652d750db7ed62f2e2dd54,PodSandboxId:6e2ca1c72e80469aca7e22ca8699685601b92abf5a48a5d6b748d7c1eeb00619,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701979499038984874,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pdnmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78416b6a-955b-4bb3-b1af-4e2860beadc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2765d046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034388e8b5f14b69a6e8c53313e7ecbcb879964cc1c13b60501b0e0b42a70783,PodSandboxId:e71cbf1a76e45b0b09a1a79aeea4a463467d0d5f9104d2d05bc14fac0a8ed43e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701979493548282484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693122e7-b001-4675-8de7-403dee50ca9f,},Annotations:map[string]string{io.kubernetes.container.hash: c8ef7cff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b1980789bae782a4306a588eb1581ffa0318c1cd2dbe3332f2a740fee9d94d,PodSandboxId:0d4ddfe182046303e6d7f357982029a5bc828b1de7a353fb8917b0b8d8d2a353,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,St
ate:CONTAINER_RUNNING,CreatedAt:1701979462350296249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pndw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11b0dbc0-9367-41dc-baab-b4f9e89a95c3,},Annotations:map[string]string{io.kubernetes.container.hash: 55afb5be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c46b25530f7209acf4d120d04cac75eb769c1a9923f06bd369675467d61f85,PodSandboxId:e71cbf1a76e45b0b09a1a79aeea4a463467d0d5f9104d2d05bc14fac0a8ed43e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER
_EXITED,CreatedAt:1701979462079400575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693122e7-b001-4675-8de7-403dee50ca9f,},Annotations:map[string]string{io.kubernetes.container.hash: c8ef7cff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35967cff7a2f7d06b752021954430bc3168ef08d50e1bc9552acd3b16c3c3df,PodSandboxId:5f9dd3c4540a72eb3c9e971238eea7109d8037baff04505d127b06b8cb6dfe59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:170
1979449944206681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bn9s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 361178d4-583f-46ed-aaaa-54331bde2a34,},Annotations:map[string]string{io.kubernetes.container.hash: 438dd847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd3dca5dd0c88eee040471b0bad32a418c6aadffc66ff987313a63e8f96cb0e,PodSandboxId:72d456f61c61c519ae25cec39d9e722d8cd8fc11e6ac05483c16ab79cfe0933a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35
c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701979426877997267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d509039ce8e28f4b9b8b64b95c174039,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3ba3c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795f767b501376594987feeac0dec16a7ecf6685d263752ef8956466aa878dbe,PodSandboxId:2d21cb7b3da5aceefb8d6e7de7f499dec2e1d716306b7e247607ff0a240ca6be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},
ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701979426639148615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190324c177c8f854cce36b306e4f49ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7100e24fb2f5d5ea15fb00b7484eaa91ee1efe71e76eb47854294b9d41f57097,PodSandboxId:abe03b71640d3b573eaa12cf20213da237d119f94b1c03d05b4263f3073fc157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:regist
ry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701979426712960237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30cf99096b1e61bc85cf3c849b120c74,},Annotations:map[string]string{io.kubernetes.container.hash: 6a01911a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab7d910920e28c20bb35e32578cada816f528f93eb61578e1d6f5acae415413,PodSandboxId:485b0379193be74f1af1e64645d0d8d751cbf66960cdb62483652cbe9f0429be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s
.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701979426447883131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ffd6321c2e96feb8d923621d412eec,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f7ef9854-9e6a-4857-8ffb-c93b634eee1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.480256322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=591e0fab-333f-4ad4-8bb8-4e5dab47c4ab name=/runtime.v1.RuntimeService/Version
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.480340081Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=591e0fab-333f-4ad4-8bb8-4e5dab47c4ab name=/runtime.v1.RuntimeService/Version
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.481356492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8937e2df-13c2-4bc1-b174-890375c4c958 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.482711888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701979786482693318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543771,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=8937e2df-13c2-4bc1-b174-890375c4c958 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.483155020Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1c17f43c-bf08-400a-ae75-b5441da5e48f name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.483232895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1c17f43c-bf08-400a-ae75-b5441da5e48f name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:09:46 addons-757601 crio[713]: time="2023-12-07 20:09:46.483573047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c96bb99a8f1e10c0c7a5db3d2ab4a0a577f54f48fd305d86af9b0f6427a353ea,PodSandboxId:0ce9d06f12b1633539d6e969350c342cb89647b9665ccbad3a6d3db21e44d4c0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701979779695032939,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-7n62d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52709a46-015b-4017-9d33-7463e710190e,},Annotations:map[string]string{io.kubernetes.container.hash: bc9da217,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a7c301c3f3c9cfadc9c98983ec36f0a9a6e27ebcc295b73893a3c359343db26,PodSandboxId:9a9574d1918c1649a12841b0ee6318b85bac7467922dae4be086b1a46676232c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701979654792073005,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-wtvt2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 59547b02-2a38-42bc-8e3a-336953be35f5,},An
notations:map[string]string{io.kubernetes.container.hash: a0c3e482,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:157ffc9d6b349b59895d7cc6e7bb862caa365f449c03d74ba6a4a7b8c3fb98e9,PodSandboxId:58ca04f6c693039fc09a383fc61f649f4d9c05ea72e663211563708981bffe6f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701979636840357748,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 57e0b0f5-d87c-4d24-bab1-4a5e62295828,},Annotations:map[string]string{io.kubernetes.container.hash: b4e7291e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54aa2d27d87910e64398118e5a009b9f3614219707fbdcbd7b93f148649c2458,PodSandboxId:256dfd11be4e4096a5898503cab089f82d7ad03e257899d4b9ea7628c67267ec,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701979600096804655,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-km9kn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 65eebacc-b9fc-46d7-91e9-ef907ef2b501,},Annotations:map[string]string{io.kubernetes.container.hash: f63f818d,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86d0b045e9fa11a90ea40346ef9673344190cc03ae3f60c2e656f4406999dc7d,PodSandboxId:6a159ee43a143ae7f168415a86ad3ab95373a036947cc29855f0a4d9498c2002,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17019795
06735125845,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zvw5f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63f5f0d6-6553-4c2b-b11f-bcb76fc64dc3,},Annotations:map[string]string{io.kubernetes.container.hash: 396200b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e3f45560c6bd53d85fad9d34def223a69fbb74b5652d750db7ed62f2e2dd54,PodSandboxId:6e2ca1c72e80469aca7e22ca8699685601b92abf5a48a5d6b748d7c1eeb00619,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701979499038984874,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pdnmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78416b6a-955b-4bb3-b1af-4e2860beadc9,},Annotations:map[string]string{io.kubernetes.container.hash: 2765d046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034388e8b5f14b69a6e8c53313e7ecbcb879964cc1c13b60501b0e0b42a70783,PodSandboxId:e71cbf1a76e45b0b09a1a79aeea4a463467d0d5f9104d2d05bc14fac0a8ed43e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701979493548282484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693122e7-b001-4675-8de7-403dee50ca9f,},Annotations:map[string]string{io.kubernetes.container.hash: c8ef7cff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b1980789bae782a4306a588eb1581ffa0318c1cd2dbe3332f2a740fee9d94d,PodSandboxId:0d4ddfe182046303e6d7f357982029a5bc828b1de7a353fb8917b0b8d8d2a353,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,St
ate:CONTAINER_RUNNING,CreatedAt:1701979462350296249,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pndw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11b0dbc0-9367-41dc-baab-b4f9e89a95c3,},Annotations:map[string]string{io.kubernetes.container.hash: 55afb5be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c46b25530f7209acf4d120d04cac75eb769c1a9923f06bd369675467d61f85,PodSandboxId:e71cbf1a76e45b0b09a1a79aeea4a463467d0d5f9104d2d05bc14fac0a8ed43e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER
_EXITED,CreatedAt:1701979462079400575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693122e7-b001-4675-8de7-403dee50ca9f,},Annotations:map[string]string{io.kubernetes.container.hash: c8ef7cff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35967cff7a2f7d06b752021954430bc3168ef08d50e1bc9552acd3b16c3c3df,PodSandboxId:5f9dd3c4540a72eb3c9e971238eea7109d8037baff04505d127b06b8cb6dfe59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:170
1979449944206681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bn9s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 361178d4-583f-46ed-aaaa-54331bde2a34,},Annotations:map[string]string{io.kubernetes.container.hash: 438dd847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd3dca5dd0c88eee040471b0bad32a418c6aadffc66ff987313a63e8f96cb0e,PodSandboxId:72d456f61c61c519ae25cec39d9e722d8cd8fc11e6ac05483c16ab79cfe0933a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35
c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701979426877997267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d509039ce8e28f4b9b8b64b95c174039,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3ba3c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795f767b501376594987feeac0dec16a7ecf6685d263752ef8956466aa878dbe,PodSandboxId:2d21cb7b3da5aceefb8d6e7de7f499dec2e1d716306b7e247607ff0a240ca6be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},
ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701979426639148615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190324c177c8f854cce36b306e4f49ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7100e24fb2f5d5ea15fb00b7484eaa91ee1efe71e76eb47854294b9d41f57097,PodSandboxId:abe03b71640d3b573eaa12cf20213da237d119f94b1c03d05b4263f3073fc157,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:regist
ry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701979426712960237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30cf99096b1e61bc85cf3c849b120c74,},Annotations:map[string]string{io.kubernetes.container.hash: 6a01911a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab7d910920e28c20bb35e32578cada816f528f93eb61578e1d6f5acae415413,PodSandboxId:485b0379193be74f1af1e64645d0d8d751cbf66960cdb62483652cbe9f0429be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s
.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701979426447883131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-757601,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ffd6321c2e96feb8d923621d412eec,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1c17f43c-bf08-400a-ae75-b5441da5e48f name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c96bb99a8f1e1       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      6 seconds ago       Running             hello-world-app           0                   0ce9d06f12b16       hello-world-app-5d77478584-7n62d
	4a7c301c3f3c9       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   9a9574d1918c1       headlamp-777fd4b855-wtvt2
	157ffc9d6b349       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                              2 minutes ago       Running             nginx                     0                   58ca04f6c6930       nginx
	54aa2d27d8791       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   256dfd11be4e4       gcp-auth-d4c87556c-km9kn
	86d0b045e9fa1       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             4 minutes ago       Exited              patch                     2                   6a159ee43a143       ingress-nginx-admission-patch-zvw5f
	d4e3f45560c6b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              create                    0                   6e2ca1c72e804       ingress-nginx-admission-create-pdnmc
	034388e8b5f14       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       1                   e71cbf1a76e45       storage-provisioner
	d9b1980789bae       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             5 minutes ago       Running             kube-proxy                0                   0d4ddfe182046       kube-proxy-pndw8
	68c46b25530f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Exited              storage-provisioner       0                   e71cbf1a76e45       storage-provisioner
	d35967cff7a2f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             5 minutes ago       Running             coredns                   0                   5f9dd3c4540a7       coredns-5dd5756b68-bn9s7
	3fd3dca5dd0c8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   72d456f61c61c       etcd-addons-757601
	7100e24fb2f5d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             5 minutes ago       Running             kube-apiserver            0                   abe03b71640d3       kube-apiserver-addons-757601
	795f767b50137       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             5 minutes ago       Running             kube-scheduler            0                   2d21cb7b3da5a       kube-scheduler-addons-757601
	eab7d910920e2       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             6 minutes ago       Running             kube-controller-manager   0                   485b0379193be       kube-controller-manager-addons-757601
	
	* 
	* ==> coredns [d35967cff7a2f7d06b752021954430bc3168ef08d50e1bc9552acd3b16c3c3df] <==
	* [INFO] 10.244.0.8:39657 - 44385 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152698s
	[INFO] 10.244.0.8:38105 - 42874 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073538s
	[INFO] 10.244.0.8:38105 - 23167 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077869s
	[INFO] 10.244.0.8:57195 - 39240 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060791s
	[INFO] 10.244.0.8:57195 - 48458 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111459s
	[INFO] 10.244.0.8:38853 - 55985 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000065997s
	[INFO] 10.244.0.8:38853 - 47027 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116029s
	[INFO] 10.244.0.8:45376 - 2319 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000037675s
	[INFO] 10.244.0.8:45376 - 51212 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000344s
	[INFO] 10.244.0.8:55032 - 5978 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000029163s
	[INFO] 10.244.0.8:55032 - 55623 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030622s
	[INFO] 10.244.0.8:49507 - 51304 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000040426s
	[INFO] 10.244.0.8:49507 - 61802 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034899s
	[INFO] 10.244.0.8:58023 - 13980 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00002942s
	[INFO] 10.244.0.8:58023 - 41374 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000077908s
	[INFO] 10.244.0.20:42364 - 22880 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000328031s
	[INFO] 10.244.0.20:57578 - 53005 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000193949s
	[INFO] 10.244.0.20:60469 - 40927 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115719s
	[INFO] 10.244.0.20:34717 - 51897 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000058753s
	[INFO] 10.244.0.20:34118 - 36525 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128507s
	[INFO] 10.244.0.20:33165 - 38038 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000060621s
	[INFO] 10.244.0.20:45565 - 54344 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000564153s
	[INFO] 10.244.0.20:38794 - 51225 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.00044923s
	[INFO] 10.244.0.24:56295 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000598121s
	[INFO] 10.244.0.24:42272 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012355s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-757601
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-757601
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=addons-757601
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T20_03_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-757601
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:03:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-757601
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:09:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:07:59 +0000   Thu, 07 Dec 2023 20:03:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:07:59 +0000   Thu, 07 Dec 2023 20:03:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:07:59 +0000   Thu, 07 Dec 2023 20:03:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:07:59 +0000   Thu, 07 Dec 2023 20:03:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.93
	  Hostname:    addons-757601
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6a80b9426184e60a351cd9247225dc6
	  System UUID:                e6a80b94-2618-4e60-a351-cd9247225dc6
	  Boot ID:                    75a2649e-0a76-4049-bf40-b233fafd2622
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-7n62d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  gcp-auth                    gcp-auth-d4c87556c-km9kn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  headlamp                    headlamp-777fd4b855-wtvt2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 coredns-5dd5756b68-bn9s7                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m40s
	  kube-system                 etcd-addons-757601                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m53s
	  kube-system                 kube-apiserver-addons-757601             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-controller-manager-addons-757601    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-proxy-pndw8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-scheduler-addons-757601             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m20s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m1s (x8 over 6m1s)  kubelet          Node addons-757601 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet          Node addons-757601 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x7 over 6m1s)  kubelet          Node addons-757601 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m53s                kubelet          Node addons-757601 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s                kubelet          Node addons-757601 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s                kubelet          Node addons-757601 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m53s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m52s                kubelet          Node addons-757601 status is now: NodeReady
	  Normal  RegisteredNode           5m40s                node-controller  Node addons-757601 event: Registered Node addons-757601 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.010404] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.971337] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.105964] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.144152] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.097336] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.220492] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +10.597395] systemd-fstab-generator[907]: Ignoring "noauto" for root device
	[  +8.231226] systemd-fstab-generator[1239]: Ignoring "noauto" for root device
	[Dec 7 20:04] kauditd_printk_skb: 59 callbacks suppressed
	[ +12.660623] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.394900] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 7 20:05] kauditd_printk_skb: 22 callbacks suppressed
	[ +14.661327] kauditd_printk_skb: 22 callbacks suppressed
	[Dec 7 20:06] kauditd_printk_skb: 18 callbacks suppressed
	[ +16.559967] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.046497] kauditd_printk_skb: 4 callbacks suppressed
	[Dec 7 20:07] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.679988] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.972406] kauditd_printk_skb: 3 callbacks suppressed
	[ +17.277988] kauditd_printk_skb: 12 callbacks suppressed
	[ +17.214793] kauditd_printk_skb: 16 callbacks suppressed
	[Dec 7 20:09] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [3fd3dca5dd0c88eee040471b0bad32a418c6aadffc66ff987313a63e8f96cb0e] <==
	* {"level":"info","ts":"2023-12-07T20:05:19.315757Z","caller":"traceutil/trace.go:171","msg":"trace[550840796] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1058; }","duration":"192.057547ms","start":"2023-12-07T20:05:19.123467Z","end":"2023-12-07T20:05:19.315525Z","steps":["trace[550840796] 'agreement among raft nodes before linearized reading'  (duration: 186.384867ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:05:25.768415Z","caller":"traceutil/trace.go:171","msg":"trace[267332272] transaction","detail":"{read_only:false; response_revision:1094; number_of_response:1; }","duration":"109.873894ms","start":"2023-12-07T20:05:25.658523Z","end":"2023-12-07T20:05:25.768396Z","steps":["trace[267332272] 'process raft request'  (duration: 109.460985ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:05:31.188913Z","caller":"traceutil/trace.go:171","msg":"trace[1686922216] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"385.547147ms","start":"2023-12-07T20:05:30.803262Z","end":"2023-12-07T20:05:31.188809Z","steps":["trace[1686922216] 'process raft request'  (duration: 384.896151ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:05:31.189183Z","caller":"traceutil/trace.go:171","msg":"trace[1620626776] linearizableReadLoop","detail":"{readStateIndex:1152; appliedIndex:1151; }","duration":"134.4075ms","start":"2023-12-07T20:05:31.053835Z","end":"2023-12-07T20:05:31.188242Z","steps":["trace[1620626776] 'read index received'  (duration: 134.284003ms)","trace[1620626776] 'applied index is now lower than readState.Index'  (duration: 123.084µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T20:05:31.189522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.54073ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13861"}
	{"level":"info","ts":"2023-12-07T20:05:31.189642Z","caller":"traceutil/trace.go:171","msg":"trace[1403306826] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1117; }","duration":"135.792819ms","start":"2023-12-07T20:05:31.053787Z","end":"2023-12-07T20:05:31.18958Z","steps":["trace[1403306826] 'agreement among raft nodes before linearized reading'  (duration: 135.498466ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T20:05:31.189929Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.415212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82441"}
	{"level":"info","ts":"2023-12-07T20:05:31.189979Z","caller":"traceutil/trace.go:171","msg":"trace[2020901125] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1117; }","duration":"133.468974ms","start":"2023-12-07T20:05:31.056504Z","end":"2023-12-07T20:05:31.189973Z","steps":["trace[2020901125] 'agreement among raft nodes before linearized reading'  (duration: 133.323256ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T20:05:31.189756Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T20:05:30.803249Z","time spent":"385.802611ms","remote":"127.0.0.1:43252","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1095 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2023-12-07T20:05:43.852253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.76237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10943"}
	{"level":"info","ts":"2023-12-07T20:05:43.852351Z","caller":"traceutil/trace.go:171","msg":"trace[600791815] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1181; }","duration":"228.868162ms","start":"2023-12-07T20:05:43.623467Z","end":"2023-12-07T20:05:43.852335Z","steps":["trace[600791815] 'range keys from in-memory index tree'  (duration: 228.667366ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:05:43.853472Z","caller":"traceutil/trace.go:171","msg":"trace[422757933] linearizableReadLoop","detail":"{readStateIndex:1220; appliedIndex:1219; }","duration":"114.299966ms","start":"2023-12-07T20:05:43.739162Z","end":"2023-12-07T20:05:43.853462Z","steps":["trace[422757933] 'read index received'  (duration: 114.102692ms)","trace[422757933] 'applied index is now lower than readState.Index'  (duration: 196.808µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T20:05:43.853732Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.608374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-jrfzg\" ","response":"range_response_count:1 size:9325"}
	{"level":"info","ts":"2023-12-07T20:05:43.853784Z","caller":"traceutil/trace.go:171","msg":"trace[1835609396] range","detail":"{range_begin:/registry/pods/gadget/gadget-jrfzg; range_end:; response_count:1; response_revision:1182; }","duration":"114.667556ms","start":"2023-12-07T20:05:43.739109Z","end":"2023-12-07T20:05:43.853777Z","steps":["trace[1835609396] 'agreement among raft nodes before linearized reading'  (duration: 114.42938ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:05:43.854013Z","caller":"traceutil/trace.go:171","msg":"trace[1320079689] transaction","detail":"{read_only:false; response_revision:1182; number_of_response:1; }","duration":"130.483499ms","start":"2023-12-07T20:05:43.723517Z","end":"2023-12-07T20:05:43.854Z","steps":["trace[1320079689] 'process raft request'  (duration: 129.841893ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:06:51.647748Z","caller":"traceutil/trace.go:171","msg":"trace[1376543920] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"124.870059ms","start":"2023-12-07T20:06:51.522862Z","end":"2023-12-07T20:06:51.647732Z","steps":["trace[1376543920] 'process raft request'  (duration: 124.4675ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:07:00.116431Z","caller":"traceutil/trace.go:171","msg":"trace[1909588873] transaction","detail":"{read_only:false; response_revision:1438; number_of_response:1; }","duration":"117.886591ms","start":"2023-12-07T20:06:59.998511Z","end":"2023-12-07T20:07:00.116397Z","steps":["trace[1909588873] 'process raft request'  (duration: 117.494588ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T20:07:08.440361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.168444ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14983"}
	{"level":"info","ts":"2023-12-07T20:07:08.440472Z","caller":"traceutil/trace.go:171","msg":"trace[1676344422] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:1493; }","duration":"148.340987ms","start":"2023-12-07T20:07:08.292119Z","end":"2023-12-07T20:07:08.44046Z","steps":["trace[1676344422] 'range keys from in-memory index tree'  (duration: 148.061883ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T20:07:08.440671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.858275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2023-12-07T20:07:08.44074Z","caller":"traceutil/trace.go:171","msg":"trace[625174556] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1493; }","duration":"118.931037ms","start":"2023-12-07T20:07:08.321799Z","end":"2023-12-07T20:07:08.44073Z","steps":["trace[625174556] 'range keys from in-memory index tree'  (duration: 118.727191ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:08:08.822878Z","caller":"traceutil/trace.go:171","msg":"trace[1476739310] linearizableReadLoop","detail":"{readStateIndex:1986; appliedIndex:1985; }","duration":"121.085571ms","start":"2023-12-07T20:08:08.701765Z","end":"2023-12-07T20:08:08.822851Z","steps":["trace[1476739310] 'read index received'  (duration: 120.784642ms)","trace[1476739310] 'applied index is now lower than readState.Index'  (duration: 300.379µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-07T20:08:08.823024Z","caller":"traceutil/trace.go:171","msg":"trace[1840855138] transaction","detail":"{read_only:false; response_revision:1902; number_of_response:1; }","duration":"137.857917ms","start":"2023-12-07T20:08:08.685155Z","end":"2023-12-07T20:08:08.823013Z","steps":["trace[1840855138] 'process raft request'  (duration: 137.527664ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T20:08:08.823153Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.372435ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-07T20:08:08.823217Z","caller":"traceutil/trace.go:171","msg":"trace[1384432906] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:1902; }","duration":"121.467328ms","start":"2023-12-07T20:08:08.701742Z","end":"2023-12-07T20:08:08.823209Z","steps":["trace[1384432906] 'agreement among raft nodes before linearized reading'  (duration: 121.35296ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [54aa2d27d87910e64398118e5a009b9f3614219707fbdcbd7b93f148649c2458] <==
	* 2023/12/07 20:06:46 Ready to write response ...
	2023/12/07 20:06:48 Ready to marshal response ...
	2023/12/07 20:06:48 Ready to write response ...
	2023/12/07 20:06:48 Ready to marshal response ...
	2023/12/07 20:06:48 Ready to write response ...
	2023/12/07 20:06:48 Ready to marshal response ...
	2023/12/07 20:06:48 Ready to write response ...
	2023/12/07 20:06:51 Ready to marshal response ...
	2023/12/07 20:06:51 Ready to write response ...
	2023/12/07 20:06:54 Ready to marshal response ...
	2023/12/07 20:06:54 Ready to write response ...
	2023/12/07 20:07:06 Ready to marshal response ...
	2023/12/07 20:07:06 Ready to write response ...
	2023/12/07 20:07:14 Ready to marshal response ...
	2023/12/07 20:07:14 Ready to write response ...
	2023/12/07 20:07:28 Ready to marshal response ...
	2023/12/07 20:07:28 Ready to write response ...
	2023/12/07 20:07:28 Ready to marshal response ...
	2023/12/07 20:07:28 Ready to write response ...
	2023/12/07 20:07:28 Ready to marshal response ...
	2023/12/07 20:07:28 Ready to write response ...
	2023/12/07 20:07:30 Ready to marshal response ...
	2023/12/07 20:07:30 Ready to write response ...
	2023/12/07 20:09:36 Ready to marshal response ...
	2023/12/07 20:09:36 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  20:09:46 up 6 min,  0 users,  load average: 0.37, 1.46, 0.88
	Linux addons-757601 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7100e24fb2f5d5ea15fb00b7484eaa91ee1efe71e76eb47854294b9d41f57097] <==
	* I1207 20:07:15.689508       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1207 20:07:15.711005       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1207 20:07:16.777078       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1207 20:07:28.130242       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.156.38"}
	E1207 20:07:30.436126       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1207 20:07:51.988713       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:07:51.988865       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:07:51.997197       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:07:51.998777       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:07:52.033906       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:07:52.033974       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:07:52.036845       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:07:52.036904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:07:52.055969       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:07:52.056222       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:07:52.062036       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:07:52.062105       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:07:52.076290       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:07:52.076366       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:07:52.083350       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:07:52.083413       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1207 20:07:53.037530       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1207 20:07:53.084036       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1207 20:07:53.097402       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1207 20:09:36.522471       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.169.204"}
	
	* 
	* ==> kube-controller-manager [eab7d910920e28c20bb35e32578cada816f528f93eb61578e1d6f5acae415413] <==
	* W1207 20:08:30.474645       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:08:30.474741       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:08:36.368895       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:08:36.368991       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:08:48.749700       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:08:48.749798       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:09:09.312855       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:09:09.313188       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:09:16.699154       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:09:16.699208       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:09:19.368282       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:09:19.368387       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1207 20:09:36.274855       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1207 20:09:36.338416       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-7n62d"
	I1207 20:09:36.368270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="92.959293ms"
	I1207 20:09:36.389976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="21.566084ms"
	I1207 20:09:36.408352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="18.303643ms"
	I1207 20:09:36.408828       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="128.376µs"
	I1207 20:09:38.412179       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1207 20:09:38.416685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="5.995µs"
	I1207 20:09:38.423348       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1207 20:09:40.782348       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.890095ms"
	I1207 20:09:40.782991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="27.839µs"
	W1207 20:09:42.865231       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:09:42.865292       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [d9b1980789bae782a4306a588eb1581ffa0318c1cd2dbe3332f2a740fee9d94d] <==
	* I1207 20:04:25.374673       1 server_others.go:69] "Using iptables proxy"
	I1207 20:04:25.493533       1 node.go:141] Successfully retrieved node IP: 192.168.39.93
	I1207 20:04:26.345842       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 20:04:26.345915       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 20:04:26.384105       1 server_others.go:152] "Using iptables Proxier"
	I1207 20:04:26.384334       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 20:04:26.384889       1 server.go:846] "Version info" version="v1.28.4"
	I1207 20:04:26.385088       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:04:26.389355       1 config.go:188] "Starting service config controller"
	I1207 20:04:26.389420       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 20:04:26.389455       1 config.go:97] "Starting endpoint slice config controller"
	I1207 20:04:26.389470       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 20:04:26.398858       1 config.go:315] "Starting node config controller"
	I1207 20:04:26.398916       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 20:04:26.529007       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 20:04:26.614460       1 shared_informer.go:318] Caches are synced for node config
	I1207 20:04:26.614532       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [795f767b501376594987feeac0dec16a7ecf6685d263752ef8956466aa878dbe] <==
	* W1207 20:03:50.589114       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 20:03:50.597221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1207 20:03:50.588677       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 20:03:50.597232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 20:03:50.597317       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 20:03:50.597421       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 20:03:51.459224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 20:03:51.459327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1207 20:03:51.513034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 20:03:51.513143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1207 20:03:51.523470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 20:03:51.523520       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1207 20:03:51.534118       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1207 20:03:51.534281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1207 20:03:51.554899       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1207 20:03:51.555074       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1207 20:03:51.599239       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 20:03:51.599355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 20:03:51.613220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 20:03:51.613442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1207 20:03:51.693505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 20:03:51.693753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 20:03:51.803269       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 20:03:51.803409       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1207 20:03:52.089025       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 20:03:22 UTC, ends at Thu 2023-12-07 20:09:47 UTC. --
	Dec 07 20:09:36 addons-757601 kubelet[1246]: I1207 20:09:36.358433    1246 memory_manager.go:346] "RemoveStaleState removing state" podUID="6f923ee7-6164-4d7d-8082-a560a56bc8ad" containerName="task-pv-container"
	Dec 07 20:09:36 addons-757601 kubelet[1246]: I1207 20:09:36.358440    1246 memory_manager.go:346] "RemoveStaleState removing state" podUID="b68a397b-b39d-409d-acfd-381617659ad1" containerName="csi-resizer"
	Dec 07 20:09:36 addons-757601 kubelet[1246]: I1207 20:09:36.479483    1246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9564t\" (UniqueName: \"kubernetes.io/projected/52709a46-015b-4017-9d33-7463e710190e-kube-api-access-9564t\") pod \"hello-world-app-5d77478584-7n62d\" (UID: \"52709a46-015b-4017-9d33-7463e710190e\") " pod="default/hello-world-app-5d77478584-7n62d"
	Dec 07 20:09:36 addons-757601 kubelet[1246]: I1207 20:09:36.479556    1246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/52709a46-015b-4017-9d33-7463e710190e-gcp-creds\") pod \"hello-world-app-5d77478584-7n62d\" (UID: \"52709a46-015b-4017-9d33-7463e710190e\") " pod="default/hello-world-app-5d77478584-7n62d"
	Dec 07 20:09:37 addons-757601 kubelet[1246]: I1207 20:09:37.689754    1246 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpr5j\" (UniqueName: \"kubernetes.io/projected/fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3-kube-api-access-bpr5j\") pod \"fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3\" (UID: \"fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3\") "
	Dec 07 20:09:37 addons-757601 kubelet[1246]: I1207 20:09:37.692251    1246 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3-kube-api-access-bpr5j" (OuterVolumeSpecName: "kube-api-access-bpr5j") pod "fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3" (UID: "fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3"). InnerVolumeSpecName "kube-api-access-bpr5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 07 20:09:37 addons-757601 kubelet[1246]: I1207 20:09:37.742693    1246 scope.go:117] "RemoveContainer" containerID="b8d645337b52ebf837bd13c2dee6c799a92abc03609af5b82193d40df0055c0a"
	Dec 07 20:09:37 addons-757601 kubelet[1246]: I1207 20:09:37.772779    1246 scope.go:117] "RemoveContainer" containerID="b8d645337b52ebf837bd13c2dee6c799a92abc03609af5b82193d40df0055c0a"
	Dec 07 20:09:37 addons-757601 kubelet[1246]: E1207 20:09:37.773441    1246 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d645337b52ebf837bd13c2dee6c799a92abc03609af5b82193d40df0055c0a\": container with ID starting with b8d645337b52ebf837bd13c2dee6c799a92abc03609af5b82193d40df0055c0a not found: ID does not exist" containerID="b8d645337b52ebf837bd13c2dee6c799a92abc03609af5b82193d40df0055c0a"
	Dec 07 20:09:37 addons-757601 kubelet[1246]: I1207 20:09:37.773515    1246 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d645337b52ebf837bd13c2dee6c799a92abc03609af5b82193d40df0055c0a"} err="failed to get container status \"b8d645337b52ebf837bd13c2dee6c799a92abc03609af5b82193d40df0055c0a\": rpc error: code = NotFound desc = could not find container \"b8d645337b52ebf837bd13c2dee6c799a92abc03609af5b82193d40df0055c0a\": container with ID starting with b8d645337b52ebf837bd13c2dee6c799a92abc03609af5b82193d40df0055c0a not found: ID does not exist"
	Dec 07 20:09:37 addons-757601 kubelet[1246]: I1207 20:09:37.792091    1246 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bpr5j\" (UniqueName: \"kubernetes.io/projected/fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3-kube-api-access-bpr5j\") on node \"addons-757601\" DevicePath \"\""
	Dec 07 20:09:39 addons-757601 kubelet[1246]: I1207 20:09:39.714067    1246 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="63f5f0d6-6553-4c2b-b11f-bcb76fc64dc3" path="/var/lib/kubelet/pods/63f5f0d6-6553-4c2b-b11f-bcb76fc64dc3/volumes"
	Dec 07 20:09:39 addons-757601 kubelet[1246]: I1207 20:09:39.714559    1246 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="78416b6a-955b-4bb3-b1af-4e2860beadc9" path="/var/lib/kubelet/pods/78416b6a-955b-4bb3-b1af-4e2860beadc9/volumes"
	Dec 07 20:09:39 addons-757601 kubelet[1246]: I1207 20:09:39.715069    1246 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3" path="/var/lib/kubelet/pods/fb49d3e9-dfcc-4bd1-baff-a7a1b80ba0c3/volumes"
	Dec 07 20:09:41 addons-757601 kubelet[1246]: I1207 20:09:41.765904    1246 scope.go:117] "RemoveContainer" containerID="bb6c359d9f5a3fec812eac75d395ca1e28e060dd9749b2ae7f0cbbf20bdca618"
	Dec 07 20:09:41 addons-757601 kubelet[1246]: I1207 20:09:41.791557    1246 scope.go:117] "RemoveContainer" containerID="bb6c359d9f5a3fec812eac75d395ca1e28e060dd9749b2ae7f0cbbf20bdca618"
	Dec 07 20:09:41 addons-757601 kubelet[1246]: E1207 20:09:41.792282    1246 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb6c359d9f5a3fec812eac75d395ca1e28e060dd9749b2ae7f0cbbf20bdca618\": container with ID starting with bb6c359d9f5a3fec812eac75d395ca1e28e060dd9749b2ae7f0cbbf20bdca618 not found: ID does not exist" containerID="bb6c359d9f5a3fec812eac75d395ca1e28e060dd9749b2ae7f0cbbf20bdca618"
	Dec 07 20:09:41 addons-757601 kubelet[1246]: I1207 20:09:41.792346    1246 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb6c359d9f5a3fec812eac75d395ca1e28e060dd9749b2ae7f0cbbf20bdca618"} err="failed to get container status \"bb6c359d9f5a3fec812eac75d395ca1e28e060dd9749b2ae7f0cbbf20bdca618\": rpc error: code = NotFound desc = could not find container \"bb6c359d9f5a3fec812eac75d395ca1e28e060dd9749b2ae7f0cbbf20bdca618\": container with ID starting with bb6c359d9f5a3fec812eac75d395ca1e28e060dd9749b2ae7f0cbbf20bdca618 not found: ID does not exist"
	Dec 07 20:09:41 addons-757601 kubelet[1246]: I1207 20:09:41.823815    1246 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe-webhook-cert\") pod \"4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe\" (UID: \"4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe\") "
	Dec 07 20:09:41 addons-757601 kubelet[1246]: I1207 20:09:41.823861    1246 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z9l7\" (UniqueName: \"kubernetes.io/projected/4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe-kube-api-access-9z9l7\") pod \"4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe\" (UID: \"4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe\") "
	Dec 07 20:09:41 addons-757601 kubelet[1246]: I1207 20:09:41.826140    1246 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe-kube-api-access-9z9l7" (OuterVolumeSpecName: "kube-api-access-9z9l7") pod "4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe" (UID: "4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe"). InnerVolumeSpecName "kube-api-access-9z9l7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 07 20:09:41 addons-757601 kubelet[1246]: I1207 20:09:41.829714    1246 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe" (UID: "4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:09:41 addons-757601 kubelet[1246]: I1207 20:09:41.924259    1246 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9z9l7\" (UniqueName: \"kubernetes.io/projected/4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe-kube-api-access-9z9l7\") on node \"addons-757601\" DevicePath \"\""
	Dec 07 20:09:41 addons-757601 kubelet[1246]: I1207 20:09:41.924320    1246 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe-webhook-cert\") on node \"addons-757601\" DevicePath \"\""
	Dec 07 20:09:43 addons-757601 kubelet[1246]: I1207 20:09:43.713953    1246 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe" path="/var/lib/kubelet/pods/4c1a5ee8-33b2-428a-8eef-5d10d2da6ffe/volumes"
	
	* 
	* ==> storage-provisioner [034388e8b5f14b69a6e8c53313e7ecbcb879964cc1c13b60501b0e0b42a70783] <==
	* I1207 20:04:53.745138       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 20:04:53.799020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 20:04:53.799091       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 20:04:53.823949       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 20:04:53.824730       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-757601_ab002585-3902-483d-9d95-bc8d6c58f4c8!
	I1207 20:04:53.830856       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c33f9ed1-561c-467f-9cae-eb5a54113b33", APIVersion:"v1", ResourceVersion:"961", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-757601_ab002585-3902-483d-9d95-bc8d6c58f4c8 became leader
	I1207 20:04:53.925807       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-757601_ab002585-3902-483d-9d95-bc8d6c58f4c8!
	
	* 
	* ==> storage-provisioner [68c46b25530f7209acf4d120d04cac75eb769c1a9923f06bd369675467d61f85] <==
	* I1207 20:04:23.192806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 20:04:53.194711       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-757601 -n addons-757601
helpers_test.go:261: (dbg) Run:  kubectl --context addons-757601 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (161.69s)

                                                
                                    
TestAddons/StoppedEnableDisable (155s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-757601
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-757601: exit status 82 (2m1.003370764s)

                                                
                                                
-- stdout --
	* Stopping node "addons-757601"  ...
	* Stopping node "addons-757601"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-757601" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-757601
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-757601: exit status 11 (21.706662122s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.93:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-757601" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-757601
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-757601: exit status 11 (6.142633559s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.93:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-757601" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-757601
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-757601: exit status 11 (6.144059453s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.93:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-757601" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.00s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (175.96s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-393627 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-393627 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.652541081s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-393627 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-393627 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e8b54a24-0cbf-4d6f-83d8-7c9169c1361c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e8b54a24-0cbf-4d6f-83d8-7c9169c1361c] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 13.025076729s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-393627 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1207 20:21:05.939414   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:05.944725   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:05.955021   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:05.975275   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:06.015528   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:06.095857   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:06.256271   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:06.576829   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:07.217741   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:08.498595   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:11.059577   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:16.180317   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:26.420822   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:21:41.701530   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:21:46.901355   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:22:09.385087   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-393627 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.146278428s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-393627 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-393627 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.25
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-393627 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-393627 addons disable ingress-dns --alsologtostderr -v=1: (4.707323134s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-393627 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-393627 addons disable ingress --alsologtostderr -v=1: (7.583786289s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-393627 -n ingress-addon-legacy-393627
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-393627 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-393627 logs -n 25: (1.194266605s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-785124 ssh sudo                                               | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC | 07 Dec 23 20:16 UTC |
	|         | umount -f /mount-9p                                                      |                             |         |         |                     |                     |
	| ssh     | functional-785124 ssh findmnt                                            | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-785124                                                     | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdspecific-port2627841582/001:/mount-9p |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                      |                             |         |         |                     |                     |
	| ssh     | functional-785124 ssh findmnt                                            | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC | 07 Dec 23 20:16 UTC |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-785124 ssh -- ls                                              | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC | 07 Dec 23 20:16 UTC |
	|         | -la /mount-9p                                                            |                             |         |         |                     |                     |
	| ssh     | functional-785124 ssh sudo                                               | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC |                     |
	|         | umount -f /mount-9p                                                      |                             |         |         |                     |                     |
	| mount   | -p functional-785124                                                     | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1030454781/001:/mount1   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| ssh     | functional-785124 ssh findmnt                                            | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC |                     |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-785124                                                     | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1030454781/001:/mount3   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-785124                                                     | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1030454781/001:/mount2   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| image   | functional-785124 image ls                                               | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC | 07 Dec 23 20:16 UTC |
	| ssh     | functional-785124 ssh findmnt                                            | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC | 07 Dec 23 20:16 UTC |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| ssh     | functional-785124 ssh findmnt                                            | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC | 07 Dec 23 20:16 UTC |
	|         | -T /mount2                                                               |                             |         |         |                     |                     |
	| image   | functional-785124                                                        | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC | 07 Dec 23 20:16 UTC |
	|         | image ls --format json                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| ssh     | functional-785124 ssh findmnt                                            | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC | 07 Dec 23 20:16 UTC |
	|         | -T /mount3                                                               |                             |         |         |                     |                     |
	| image   | functional-785124                                                        | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC | 07 Dec 23 20:16 UTC |
	|         | image ls --format table                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| mount   | -p functional-785124                                                     | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:16 UTC |                     |
	|         | --kill=true                                                              |                             |         |         |                     |                     |
	| delete  | -p functional-785124                                                     | functional-785124           | jenkins | v1.32.0 | 07 Dec 23 20:17 UTC | 07 Dec 23 20:17 UTC |
	| start   | -p ingress-addon-legacy-393627                                           | ingress-addon-legacy-393627 | jenkins | v1.32.0 | 07 Dec 23 20:17 UTC | 07 Dec 23 20:19 UTC |
	|         | --kubernetes-version=v1.18.20                                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                                       |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-393627                                              | ingress-addon-legacy-393627 | jenkins | v1.32.0 | 07 Dec 23 20:19 UTC | 07 Dec 23 20:19 UTC |
	|         | addons enable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-393627                                              | ingress-addon-legacy-393627 | jenkins | v1.32.0 | 07 Dec 23 20:19 UTC | 07 Dec 23 20:19 UTC |
	|         | addons enable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-393627                                              | ingress-addon-legacy-393627 | jenkins | v1.32.0 | 07 Dec 23 20:19 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-393627 ip                                           | ingress-addon-legacy-393627 | jenkins | v1.32.0 | 07 Dec 23 20:22 UTC | 07 Dec 23 20:22 UTC |
	| addons  | ingress-addon-legacy-393627                                              | ingress-addon-legacy-393627 | jenkins | v1.32.0 | 07 Dec 23 20:22 UTC | 07 Dec 23 20:22 UTC |
	|         | addons disable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-393627                                              | ingress-addon-legacy-393627 | jenkins | v1.32.0 | 07 Dec 23 20:22 UTC | 07 Dec 23 20:22 UTC |
	|         | addons disable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:17:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:17:07.622605   25590 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:17:07.622885   25590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:17:07.622895   25590 out.go:309] Setting ErrFile to fd 2...
	I1207 20:17:07.622903   25590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:17:07.623105   25590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 20:17:07.623709   25590 out.go:303] Setting JSON to false
	I1207 20:17:07.624606   25590 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3574,"bootTime":1701976654,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 20:17:07.624662   25590 start.go:138] virtualization: kvm guest
	I1207 20:17:07.627103   25590 out.go:177] * [ingress-addon-legacy-393627] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 20:17:07.629039   25590 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:17:07.630801   25590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:17:07.629047   25590 notify.go:220] Checking for updates...
	I1207 20:17:07.632627   25590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:17:07.634433   25590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:17:07.636115   25590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 20:17:07.637648   25590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:17:07.639361   25590 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:17:07.674819   25590 out.go:177] * Using the kvm2 driver based on user configuration
	I1207 20:17:07.676645   25590 start.go:298] selected driver: kvm2
	I1207 20:17:07.676662   25590 start.go:902] validating driver "kvm2" against <nil>
	I1207 20:17:07.676673   25590 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:17:07.677328   25590 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:17:07.677405   25590 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 20:17:07.691702   25590 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 20:17:07.691755   25590 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 20:17:07.691947   25590 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 20:17:07.692008   25590 cni.go:84] Creating CNI manager for ""
	I1207 20:17:07.692020   25590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:17:07.692031   25590 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 20:17:07.692040   25590 start_flags.go:323] config:
	{Name:ingress-addon-legacy-393627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-393627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:17:07.692174   25590 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:17:07.694194   25590 out.go:177] * Starting control plane node ingress-addon-legacy-393627 in cluster ingress-addon-legacy-393627
	I1207 20:17:07.695873   25590 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1207 20:17:08.196383   25590 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1207 20:17:08.196426   25590 cache.go:56] Caching tarball of preloaded images
	I1207 20:17:08.196615   25590 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1207 20:17:08.198747   25590 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1207 20:17:08.200500   25590 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:17:08.316398   25590 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1207 20:17:25.878546   25590 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:17:25.878656   25590 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:17:26.857155   25590 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1207 20:17:26.857506   25590 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/config.json ...
	I1207 20:17:26.857545   25590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/config.json: {Name:mk1a75b6420b86bab83ca2e751f8ae1708685746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:17:26.857742   25590 start.go:365] acquiring machines lock for ingress-addon-legacy-393627: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 20:17:26.857789   25590 start.go:369] acquired machines lock for "ingress-addon-legacy-393627" in 27.165µs
	I1207 20:17:26.857811   25590 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-393627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-393627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 20:17:26.857884   25590 start.go:125] createHost starting for "" (driver="kvm2")
	I1207 20:17:26.860160   25590 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1207 20:17:26.860311   25590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:17:26.860365   25590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:17:26.874174   25590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I1207 20:17:26.874555   25590 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:17:26.875070   25590 main.go:141] libmachine: Using API Version  1
	I1207 20:17:26.875091   25590 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:17:26.875378   25590 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:17:26.875594   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetMachineName
	I1207 20:17:26.875764   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .DriverName
	I1207 20:17:26.875934   25590 start.go:159] libmachine.API.Create for "ingress-addon-legacy-393627" (driver="kvm2")
	I1207 20:17:26.875960   25590 client.go:168] LocalClient.Create starting
	I1207 20:17:26.875985   25590 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 20:17:26.876014   25590 main.go:141] libmachine: Decoding PEM data...
	I1207 20:17:26.876029   25590 main.go:141] libmachine: Parsing certificate...
	I1207 20:17:26.876087   25590 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 20:17:26.876108   25590 main.go:141] libmachine: Decoding PEM data...
	I1207 20:17:26.876119   25590 main.go:141] libmachine: Parsing certificate...
	I1207 20:17:26.876132   25590 main.go:141] libmachine: Running pre-create checks...
	I1207 20:17:26.876141   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .PreCreateCheck
	I1207 20:17:26.876445   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetConfigRaw
	I1207 20:17:26.876793   25590 main.go:141] libmachine: Creating machine...
	I1207 20:17:26.876806   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .Create
	I1207 20:17:26.876937   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Creating KVM machine...
	I1207 20:17:26.878185   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found existing default KVM network
	I1207 20:17:26.878836   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:26.878673   25658 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001478d0}
	I1207 20:17:26.883690   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | trying to create private KVM network mk-ingress-addon-legacy-393627 192.168.39.0/24...
	I1207 20:17:26.950408   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | private KVM network mk-ingress-addon-legacy-393627 192.168.39.0/24 created
	I1207 20:17:26.950463   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:26.950357   25658 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:17:26.950506   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627 ...
	I1207 20:17:26.950528   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 20:17:26.950551   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 20:17:27.155775   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:27.155652   25658 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa...
	I1207 20:17:27.247031   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:27.246920   25658 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/ingress-addon-legacy-393627.rawdisk...
	I1207 20:17:27.247060   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Writing magic tar header
	I1207 20:17:27.247072   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Writing SSH key tar header
	I1207 20:17:27.247081   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:27.247055   25658 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627 ...
	I1207 20:17:27.247198   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627
	I1207 20:17:27.247223   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 20:17:27.247234   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627 (perms=drwx------)
	I1207 20:17:27.247246   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 20:17:27.247256   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 20:17:27.247266   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 20:17:27.247275   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 20:17:27.247287   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 20:17:27.247294   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Creating domain...
	I1207 20:17:27.247302   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:17:27.247309   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 20:17:27.247318   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 20:17:27.247325   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Checking permissions on dir: /home/jenkins
	I1207 20:17:27.247333   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Checking permissions on dir: /home
	I1207 20:17:27.247339   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Skipping /home - not owner
	I1207 20:17:27.248331   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) define libvirt domain using xml: 
	I1207 20:17:27.248341   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) <domain type='kvm'>
	I1207 20:17:27.248348   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   <name>ingress-addon-legacy-393627</name>
	I1207 20:17:27.248357   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   <memory unit='MiB'>4096</memory>
	I1207 20:17:27.248364   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   <vcpu>2</vcpu>
	I1207 20:17:27.248376   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   <features>
	I1207 20:17:27.248386   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <acpi/>
	I1207 20:17:27.248391   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <apic/>
	I1207 20:17:27.248398   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <pae/>
	I1207 20:17:27.248405   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     
	I1207 20:17:27.248412   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   </features>
	I1207 20:17:27.248420   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   <cpu mode='host-passthrough'>
	I1207 20:17:27.248428   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   
	I1207 20:17:27.248433   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   </cpu>
	I1207 20:17:27.248461   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   <os>
	I1207 20:17:27.248491   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <type>hvm</type>
	I1207 20:17:27.248517   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <boot dev='cdrom'/>
	I1207 20:17:27.248529   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <boot dev='hd'/>
	I1207 20:17:27.248544   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <bootmenu enable='no'/>
	I1207 20:17:27.248560   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   </os>
	I1207 20:17:27.248574   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   <devices>
	I1207 20:17:27.248588   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <disk type='file' device='cdrom'>
	I1207 20:17:27.248608   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/boot2docker.iso'/>
	I1207 20:17:27.248621   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <target dev='hdc' bus='scsi'/>
	I1207 20:17:27.248634   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <readonly/>
	I1207 20:17:27.248648   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     </disk>
	I1207 20:17:27.248661   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <disk type='file' device='disk'>
	I1207 20:17:27.248677   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 20:17:27.248696   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/ingress-addon-legacy-393627.rawdisk'/>
	I1207 20:17:27.248714   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <target dev='hda' bus='virtio'/>
	I1207 20:17:27.248728   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     </disk>
	I1207 20:17:27.248740   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <interface type='network'>
	I1207 20:17:27.248754   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <source network='mk-ingress-addon-legacy-393627'/>
	I1207 20:17:27.248764   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <model type='virtio'/>
	I1207 20:17:27.248774   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     </interface>
	I1207 20:17:27.248780   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <interface type='network'>
	I1207 20:17:27.248789   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <source network='default'/>
	I1207 20:17:27.248795   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <model type='virtio'/>
	I1207 20:17:27.248801   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     </interface>
	I1207 20:17:27.248812   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <serial type='pty'>
	I1207 20:17:27.248821   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <target port='0'/>
	I1207 20:17:27.248828   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     </serial>
	I1207 20:17:27.248837   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <console type='pty'>
	I1207 20:17:27.248843   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <target type='serial' port='0'/>
	I1207 20:17:27.248851   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     </console>
	I1207 20:17:27.248857   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     <rng model='virtio'>
	I1207 20:17:27.248864   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)       <backend model='random'>/dev/random</backend>
	I1207 20:17:27.248872   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     </rng>
	I1207 20:17:27.248878   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     
	I1207 20:17:27.248885   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)     
	I1207 20:17:27.248891   25590 main.go:141] libmachine: (ingress-addon-legacy-393627)   </devices>
	I1207 20:17:27.248899   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) </domain>
	I1207 20:17:27.248907   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) 
	I1207 20:17:27.253099   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:f6:ed:cc in network default
	I1207 20:17:27.253581   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Ensuring networks are active...
	I1207 20:17:27.253623   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:27.254259   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Ensuring network default is active
	I1207 20:17:27.254522   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Ensuring network mk-ingress-addon-legacy-393627 is active
	I1207 20:17:27.255094   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Getting domain xml...
	I1207 20:17:27.255750   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Creating domain...
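
The XML dump above is the libvirt domain definition that the kvm2 driver generates for the VM (boot ISO on scsi, raw disk on virtio, two virtio NICs, serial console, virtio RNG). The "Creating domain..." step is the define-then-start flow; a minimal sketch of the same flow, shelling out to virsh with hypothetical file and domain names (the driver itself talks to libvirt directly rather than invoking the CLI):

    // define_and_start.go - sketch only: register a domain from an XML file and
    // boot it via virsh. File path and domain name are illustrative.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// "virsh define" registers the domain from its XML description.
    	if out, err := exec.Command("virsh", "define", "/tmp/ingress-addon-legacy.xml").CombinedOutput(); err != nil {
    		log.Fatalf("define failed: %v\n%s", err, out)
    	}
    	// "virsh start" boots the defined domain (the "Creating domain..." step in the log).
    	if out, err := exec.Command("virsh", "start", "ingress-addon-legacy-393627").CombinedOutput(); err != nil {
    		log.Fatalf("start failed: %v\n%s", err, out)
    	}
    	log.Println("domain defined and started; next step is waiting for a DHCP lease")
    }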
	I1207 20:17:28.466843   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Waiting to get IP...
	I1207 20:17:28.467540   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:28.467813   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:28.467851   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:28.467788   25658 retry.go:31] will retry after 229.298721ms: waiting for machine to come up
	I1207 20:17:28.699354   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:28.699758   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:28.699786   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:28.699718   25658 retry.go:31] will retry after 350.608218ms: waiting for machine to come up
	I1207 20:17:29.052053   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:29.052525   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:29.052605   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:29.052467   25658 retry.go:31] will retry after 309.270306ms: waiting for machine to come up
	I1207 20:17:29.362987   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:29.363592   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:29.363625   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:29.363535   25658 retry.go:31] will retry after 414.965477ms: waiting for machine to come up
	I1207 20:17:29.780022   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:29.780393   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:29.780430   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:29.780337   25658 retry.go:31] will retry after 461.392828ms: waiting for machine to come up
	I1207 20:17:30.242840   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:30.243185   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:30.243215   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:30.243138   25658 retry.go:31] will retry after 625.059624ms: waiting for machine to come up
	I1207 20:17:30.869754   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:30.870229   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:30.870254   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:30.870178   25658 retry.go:31] will retry after 990.456305ms: waiting for machine to come up
	I1207 20:17:31.862408   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:31.862986   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:31.863008   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:31.862930   25658 retry.go:31] will retry after 1.029372974s: waiting for machine to come up
	I1207 20:17:32.893507   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:32.893947   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:32.893979   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:32.893868   25658 retry.go:31] will retry after 1.807045768s: waiting for machine to come up
	I1207 20:17:34.702126   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:34.702569   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:34.702600   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:34.702513   25658 retry.go:31] will retry after 1.970089737s: waiting for machine to come up
	I1207 20:17:36.675703   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:36.676110   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:36.676126   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:36.676080   25658 retry.go:31] will retry after 2.364612679s: waiting for machine to come up
	I1207 20:17:39.043099   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:39.043536   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:39.043557   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:39.043485   25658 retry.go:31] will retry after 3.557840899s: waiting for machine to come up
	I1207 20:17:42.603550   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:42.603906   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:42.603935   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:42.603854   25658 retry.go:31] will retry after 3.267552644s: waiting for machine to come up
	I1207 20:17:45.875184   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:45.875576   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find current IP address of domain ingress-addon-legacy-393627 in network mk-ingress-addon-legacy-393627
	I1207 20:17:45.875601   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | I1207 20:17:45.875527   25658 retry.go:31] will retry after 4.673696264s: waiting for machine to come up
	I1207 20:17:50.553473   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:50.553896   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has current primary IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:50.553911   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Found IP for machine: 192.168.39.25
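
The repeated "unable to find current IP address ... will retry after ..." lines are a bounded poll: the driver looks for a DHCP lease matching the VM's MAC address and, while none exists, sleeps for a growing interval before trying again. A sketch of that shape, where lookupIP is a stand-in for querying the libvirt DHCP leases:

    // wait_for_ip.go - sketch of the retry-with-growing-backoff loop seen above.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a placeholder: a real implementation would parse
    // `virsh net-dhcp-leases` (or use the libvirt API) and match on the MAC.
    func lookupIP(mac string) (string, error) {
    	return "", errNoLease
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
    		time.Sleep(backoff)
    		if backoff < 5*time.Second {
    			backoff = backoff * 3 / 2 // grow roughly like the intervals in the log
    		}
    	}
    	return "", fmt.Errorf("no IP for %s within %v", mac, timeout)
    }

    func main() {
    	if ip, err := waitForIP("52:54:00:3d:d8:28", 30*time.Second); err == nil {
    		fmt.Println("found IP:", ip)
    	}
    }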
	I1207 20:17:50.553946   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Reserving static IP address...
	I1207 20:17:50.554347   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-393627", mac: "52:54:00:3d:d8:28", ip: "192.168.39.25"} in network mk-ingress-addon-legacy-393627
	I1207 20:17:50.625108   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Getting to WaitForSSH function...
	I1207 20:17:50.625142   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Reserved static IP address: 192.168.39.25
	I1207 20:17:50.625160   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Waiting for SSH to be available...
	I1207 20:17:50.627756   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:50.628167   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:50.628197   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:50.628360   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Using SSH client type: external
	I1207 20:17:50.628386   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa (-rw-------)
	I1207 20:17:50.628421   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 20:17:50.628440   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | About to run SSH command:
	I1207 20:17:50.628457   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | exit 0
	I1207 20:17:50.761605   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | SSH cmd err, output: <nil>: 
	I1207 20:17:50.761832   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) KVM machine creation complete!
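
"Waiting for SSH to be available" amounts to running `exit 0` over SSH (with host-key checking disabled, as in the DBG option list above) until the command returns cleanly. A sketch under that assumption, reusing the address and key path from the log:

    // wait_for_ssh.go - sketch: poll until `ssh ... exit 0` succeeds.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func waitForSSH(addr, keyPath string) error {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", keyPath,
    		"docker@" + addr,
    		"exit 0",
    	}
    	for i := 0; i < 30; i++ {
    		if err := exec.Command("ssh", args...).Run(); err == nil {
    			return nil // command ran and returned status 0: sshd is up
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("ssh to %s did not become available", addr)
    }

    func main() {
    	key := "/home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa"
    	if err := waitForSSH("192.168.39.25", key); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("SSH is available")
    }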
	I1207 20:17:50.762193   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetConfigRaw
	I1207 20:17:50.762695   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .DriverName
	I1207 20:17:50.762883   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .DriverName
	I1207 20:17:50.763001   25590 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1207 20:17:50.763017   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetState
	I1207 20:17:50.764161   25590 main.go:141] libmachine: Detecting operating system of created instance...
	I1207 20:17:50.764174   25590 main.go:141] libmachine: Waiting for SSH to be available...
	I1207 20:17:50.764181   25590 main.go:141] libmachine: Getting to WaitForSSH function...
	I1207 20:17:50.764187   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:50.766267   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:50.766584   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:50.766618   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:50.766727   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:17:50.766915   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:50.767031   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:50.767163   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:17:50.767291   25590 main.go:141] libmachine: Using SSH client type: native
	I1207 20:17:50.767704   25590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I1207 20:17:50.767721   25590 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1207 20:17:50.893104   25590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:17:50.893124   25590 main.go:141] libmachine: Detecting the provisioner...
	I1207 20:17:50.893132   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:50.895794   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:50.896069   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:50.896115   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:50.896310   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:17:50.896504   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:50.896656   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:50.896811   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:17:50.896961   25590 main.go:141] libmachine: Using SSH client type: native
	I1207 20:17:50.897345   25590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I1207 20:17:50.897358   25590 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1207 20:17:51.022390   25590 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2b7375-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1207 20:17:51.022489   25590 main.go:141] libmachine: found compatible host: buildroot
	I1207 20:17:51.022504   25590 main.go:141] libmachine: Provisioning with buildroot...
	I1207 20:17:51.022518   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetMachineName
	I1207 20:17:51.022748   25590 buildroot.go:166] provisioning hostname "ingress-addon-legacy-393627"
	I1207 20:17:51.022777   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetMachineName
	I1207 20:17:51.022959   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:51.025533   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.025947   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:51.025987   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.026143   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:17:51.026304   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:51.026454   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:51.026610   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:17:51.026754   25590 main.go:141] libmachine: Using SSH client type: native
	I1207 20:17:51.027075   25590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I1207 20:17:51.027089   25590 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-393627 && echo "ingress-addon-legacy-393627" | sudo tee /etc/hostname
	I1207 20:17:51.166909   25590 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-393627
	
	I1207 20:17:51.166939   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:51.169843   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.170220   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:51.170255   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.170413   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:17:51.170619   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:51.170791   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:51.170915   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:17:51.171095   25590 main.go:141] libmachine: Using SSH client type: native
	I1207 20:17:51.171415   25590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I1207 20:17:51.171435   25590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-393627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-393627/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-393627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:17:51.301483   25590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
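
Hostname provisioning is two SSH commands: set the hostname and write /etc/hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for it (replacing an existing one if present). A sketch that assembles the same shell fragment in Go; runSSH is a hypothetical runner like the one sketched earlier:

    // set_hostname.go - sketch of the hostname provisioning script seen above.
    package main

    import "fmt"

    func hostnameScript(name string) string {
    	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() {
    	fmt.Println(hostnameScript("ingress-addon-legacy-393627"))
    	// runSSH("192.168.39.25", hostnameScript("ingress-addon-legacy-393627")) // hypothetical helper
    }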
	I1207 20:17:51.301511   25590 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 20:17:51.301542   25590 buildroot.go:174] setting up certificates
	I1207 20:17:51.301553   25590 provision.go:83] configureAuth start
	I1207 20:17:51.301563   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetMachineName
	I1207 20:17:51.301845   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetIP
	I1207 20:17:51.304432   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.304778   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:51.304804   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.304931   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:51.307035   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.307288   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:51.307319   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.307486   25590 provision.go:138] copyHostCerts
	I1207 20:17:51.307512   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:17:51.307539   25590 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 20:17:51.307547   25590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:17:51.307611   25590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 20:17:51.307722   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:17:51.307751   25590 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 20:17:51.307757   25590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:17:51.307794   25590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 20:17:51.307851   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:17:51.307866   25590 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 20:17:51.307873   25590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:17:51.307895   25590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 20:17:51.307968   25590 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-393627 san=[192.168.39.25 192.168.39.25 localhost 127.0.0.1 minikube ingress-addon-legacy-393627]
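
The server certificate generated here carries the SAN list shown in the log (the VM IP, localhost, 127.0.0.1, minikube, and the machine name). A minimal crypto/x509 sketch of issuing such a certificate; for brevity it self-signs, whereas the real step signs with the minikube CA key (ca.pem / ca-key.pem):

    // server_cert.go - sketch: build a server cert template with the SANs above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-393627"}},
    		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-393627"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.39.25"), net.ParseIP("127.0.0.1")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed for the sketch: template doubles as the parent certificate.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }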
	I1207 20:17:51.475299   25590 provision.go:172] copyRemoteCerts
	I1207 20:17:51.475357   25590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:17:51.475379   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:51.477843   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.478149   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:51.478181   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.478315   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:17:51.478493   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:51.478613   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:17:51.478718   25590 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa Username:docker}
	I1207 20:17:51.571632   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 20:17:51.571715   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 20:17:51.593513   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 20:17:51.593576   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1207 20:17:51.614979   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 20:17:51.615039   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 20:17:51.636037   25590 provision.go:86] duration metric: configureAuth took 334.469309ms
	I1207 20:17:51.636064   25590 buildroot.go:189] setting minikube options for container-runtime
	I1207 20:17:51.636270   25590 config.go:182] Loaded profile config "ingress-addon-legacy-393627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1207 20:17:51.636358   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:51.639034   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.639387   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:51.639428   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.639591   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:17:51.639789   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:51.639964   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:51.640113   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:17:51.640331   25590 main.go:141] libmachine: Using SSH client type: native
	I1207 20:17:51.640635   25590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I1207 20:17:51.640650   25590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 20:17:51.947387   25590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 20:17:51.947417   25590 main.go:141] libmachine: Checking connection to Docker...
	I1207 20:17:51.947432   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetURL
	I1207 20:17:51.948865   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Using libvirt version 6000000
	I1207 20:17:51.951168   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.951516   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:51.951535   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.951735   25590 main.go:141] libmachine: Docker is up and running!
	I1207 20:17:51.951751   25590 main.go:141] libmachine: Reticulating splines...
	I1207 20:17:51.951759   25590 client.go:171] LocalClient.Create took 25.075790467s
	I1207 20:17:51.951783   25590 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-393627" took 25.075849138s
	I1207 20:17:51.951794   25590 start.go:300] post-start starting for "ingress-addon-legacy-393627" (driver="kvm2")
	I1207 20:17:51.951805   25590 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:17:51.951826   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .DriverName
	I1207 20:17:51.952081   25590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:17:51.952111   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:51.954257   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.954610   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:51.954639   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:51.954734   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:17:51.954903   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:51.955048   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:17:51.955204   25590 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa Username:docker}
	I1207 20:17:52.046849   25590 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:17:52.051396   25590 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 20:17:52.051423   25590 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 20:17:52.051505   25590 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 20:17:52.051575   25590 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 20:17:52.051584   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /etc/ssl/certs/168402.pem
	I1207 20:17:52.051690   25590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 20:17:52.060543   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:17:52.083357   25590 start.go:303] post-start completed in 131.553217ms
	I1207 20:17:52.083403   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetConfigRaw
	I1207 20:17:52.084019   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetIP
	I1207 20:17:52.086410   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:52.086740   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:52.086772   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:52.086984   25590 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/config.json ...
	I1207 20:17:52.087208   25590 start.go:128] duration metric: createHost completed in 25.229313722s
	I1207 20:17:52.087249   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:52.089537   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:52.089847   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:52.089954   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:52.090083   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:17:52.090250   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:52.090423   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:52.090578   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:17:52.090749   25590 main.go:141] libmachine: Using SSH client type: native
	I1207 20:17:52.091065   25590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I1207 20:17:52.091079   25590 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 20:17:52.214452   25590 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701980272.185604559
	
	I1207 20:17:52.214473   25590 fix.go:206] guest clock: 1701980272.185604559
	I1207 20:17:52.214483   25590 fix.go:219] Guest: 2023-12-07 20:17:52.185604559 +0000 UTC Remote: 2023-12-07 20:17:52.08722191 +0000 UTC m=+44.512787611 (delta=98.382649ms)
	I1207 20:17:52.214516   25590 fix.go:190] guest clock delta is within tolerance: 98.382649ms
	I1207 20:17:52.214522   25590 start.go:83] releasing machines lock for "ingress-addon-legacy-393627", held for 25.356723498s
	I1207 20:17:52.214548   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .DriverName
	I1207 20:17:52.214797   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetIP
	I1207 20:17:52.217445   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:52.217758   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:52.217790   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:52.217902   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .DriverName
	I1207 20:17:52.218411   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .DriverName
	I1207 20:17:52.218610   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .DriverName
	I1207 20:17:52.218673   25590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:17:52.218720   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:52.218825   25590 ssh_runner.go:195] Run: cat /version.json
	I1207 20:17:52.218845   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:17:52.221075   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:52.221409   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:52.221470   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:52.221506   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:52.221740   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:17:52.221749   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:52.221778   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:52.221894   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:52.221990   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:17:52.222072   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:17:52.222147   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:17:52.222201   25590 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa Username:docker}
	I1207 20:17:52.222266   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:17:52.222397   25590 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa Username:docker}
	I1207 20:17:52.342445   25590 ssh_runner.go:195] Run: systemctl --version
	I1207 20:17:52.348291   25590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 20:17:52.516996   25590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 20:17:52.522968   25590 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 20:17:52.523032   25590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 20:17:52.540673   25590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 20:17:52.540689   25590 start.go:475] detecting cgroup driver to use...
	I1207 20:17:52.540735   25590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 20:17:52.559335   25590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 20:17:52.573286   25590 docker.go:203] disabling cri-docker service (if available) ...
	I1207 20:17:52.573323   25590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 20:17:52.587845   25590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 20:17:52.601989   25590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 20:17:52.710696   25590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 20:17:52.818944   25590 docker.go:219] disabling docker service ...
	I1207 20:17:52.819010   25590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 20:17:52.831976   25590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 20:17:52.843732   25590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 20:17:52.949210   25590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 20:17:53.054853   25590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 20:17:53.066523   25590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:17:53.082787   25590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1207 20:17:53.082837   25590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:17:53.092232   25590 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 20:17:53.092288   25590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:17:53.100860   25590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:17:53.109292   25590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:17:53.117708   25590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
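
The CRI-O configuration is adjusted by in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: point pause_image at registry.k8s.io/pause:3.2, switch cgroup_manager to cgroupfs, and pin conmon_cgroup to "pod". A sketch that builds the same command strings for an SSH runner (runSSH is a stand-in):

    // crio_config.go - sketch of the CRI-O config tweaks applied above.
    package main

    import "fmt"

    func crioConfigCommands(pauseImage, cgroupDriver string) []string {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    	}
    }

    func main() {
    	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.2", "cgroupfs") {
    		fmt.Println(cmd)
    		// runSSH("192.168.39.25", cmd) // hypothetical helper
    	}
    }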
	I1207 20:17:53.126476   25590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:17:53.133671   25590 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 20:17:53.133710   25590 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 20:17:53.145427   25590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
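
The status-255 sysctl result above is the expected fallback path: /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so on failure the module is loaded and IPv4 forwarding is enabled before CRI-O is restarted. A sketch of that probe-then-fallback logic:

    // netfilter.go - sketch: probe the bridge-netfilter sysctl; if the probe
    // fails, load br_netfilter, then enable IP forwarding.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		log.Printf("sysctl probe failed (module not loaded yet): %v", err)
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			log.Fatalf("modprobe br_netfilter: %v", err)
    		}
    	}
    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		log.Fatalf("enable ip_forward: %v", err)
    	}
    }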
	I1207 20:17:53.154319   25590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:17:53.251618   25590 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 20:17:53.425292   25590 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 20:17:53.425371   25590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 20:17:53.429771   25590 start.go:543] Will wait 60s for crictl version
	I1207 20:17:53.429826   25590 ssh_runner.go:195] Run: which crictl
	I1207 20:17:53.433342   25590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 20:17:53.472406   25590 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 20:17:53.472480   25590 ssh_runner.go:195] Run: crio --version
	I1207 20:17:53.519429   25590 ssh_runner.go:195] Run: crio --version
	I1207 20:17:53.565154   25590 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1207 20:17:53.566557   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetIP
	I1207 20:17:53.569197   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:53.569559   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:17:53.569597   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:17:53.569789   25590 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 20:17:53.573637   25590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:17:53.586111   25590 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1207 20:17:53.586160   25590 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 20:17:53.619562   25590 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1207 20:17:53.619619   25590 ssh_runner.go:195] Run: which lz4
	I1207 20:17:53.623136   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1207 20:17:53.623223   25590 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 20:17:53.627120   25590 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 20:17:53.627141   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1207 20:17:55.597339   25590 crio.go:444] Took 1.974143 seconds to copy over tarball
	I1207 20:17:55.597401   25590 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 20:17:58.864154   25590 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.266712599s)
	I1207 20:17:58.864182   25590 crio.go:451] Took 3.266820 seconds to extract the tarball
	I1207 20:17:58.864192   25590 ssh_runner.go:146] rm: /preloaded.tar.lz4
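
The preload step copies the ~495 MB lz4 image tarball into the guest, unpacks it under /var with lz4 as the tar decompressor, and removes it. A sketch of the same sequence; plain scp/ssh stand in here for the driver's own SSH runner, and paths are the ones from the log:

    // preload.go - sketch of the preload copy-and-extract step above.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	key := "/home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa"
    	src := "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
    	// Copy the tarball to the guest root as /preloaded.tar.lz4.
    	if out, err := exec.Command("scp",
    		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
    		"-i", key, src, "docker@192.168.39.25:/preloaded.tar.lz4").CombinedOutput(); err != nil {
    		log.Fatalf("copy failed: %v\n%s", err, out)
    	}
    	// Extract with lz4 as the decompressor, exactly as in the log, then clean up.
    	script := "sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4"
    	if out, err := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
    		"-i", key, "docker@192.168.39.25", script).CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    }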
	I1207 20:17:58.906448   25590 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 20:17:58.954608   25590 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1207 20:17:58.954645   25590 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 20:17:58.954736   25590 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:17:58.954779   25590 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 20:17:58.954786   25590 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1207 20:17:58.954794   25590 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1207 20:17:58.954831   25590 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1207 20:17:58.954764   25590 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1207 20:17:58.954794   25590 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1207 20:17:58.954741   25590 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1207 20:17:58.956101   25590 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1207 20:17:58.956113   25590 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1207 20:17:58.956138   25590 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:17:58.956152   25590 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1207 20:17:58.956158   25590 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1207 20:17:58.956189   25590 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 20:17:58.956200   25590 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1207 20:17:58.956188   25590 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1207 20:17:59.144274   25590 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1207 20:17:59.167909   25590 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1207 20:17:59.184382   25590 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1207 20:17:59.188476   25590 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1207 20:17:59.188513   25590 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1207 20:17:59.188550   25590 ssh_runner.go:195] Run: which crictl
	I1207 20:17:59.215837   25590 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1207 20:17:59.215879   25590 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1207 20:17:59.215925   25590 ssh_runner.go:195] Run: which crictl
	I1207 20:17:59.239186   25590 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1207 20:17:59.239217   25590 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1207 20:17:59.239249   25590 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1207 20:17:59.239282   25590 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1207 20:17:59.239312   25590 ssh_runner.go:195] Run: which crictl
	I1207 20:17:59.258210   25590 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1207 20:17:59.260584   25590 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1207 20:17:59.267100   25590 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1207 20:17:59.307820   25590 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 20:17:59.323420   25590 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1207 20:17:59.323420   25590 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1207 20:17:59.323518   25590 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1207 20:17:59.375778   25590 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1207 20:17:59.375819   25590 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1207 20:17:59.375876   25590 ssh_runner.go:195] Run: which crictl
	I1207 20:17:59.456506   25590 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1207 20:17:59.456548   25590 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1207 20:17:59.456561   25590 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1207 20:17:59.456631   25590 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1207 20:17:59.456594   25590 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1207 20:17:59.456701   25590 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1207 20:17:59.456719   25590 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 20:17:59.456638   25590 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1207 20:17:59.456703   25590 ssh_runner.go:195] Run: which crictl
	I1207 20:17:59.456752   25590 ssh_runner.go:195] Run: which crictl
	I1207 20:17:59.456598   25590 ssh_runner.go:195] Run: which crictl
	I1207 20:17:59.494323   25590 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1207 20:17:59.494408   25590 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1207 20:17:59.494454   25590 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1207 20:17:59.494461   25590 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 20:17:59.552042   25590 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1207 20:17:59.561620   25590 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1207 20:17:59.568145   25590 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1207 20:17:59.885866   25590 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:18:00.026547   25590 cache_images.go:92] LoadImages completed in 1.071884506s
	W1207 20:18:00.026659   25590 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1207 20:18:00.026742   25590 ssh_runner.go:195] Run: crio config
	I1207 20:18:00.087355   25590 cni.go:84] Creating CNI manager for ""
	I1207 20:18:00.087376   25590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:18:00.087390   25590 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:18:00.087407   25590 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.25 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-393627 NodeName:ingress-addon-legacy-393627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1207 20:18:00.087557   25590 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-393627"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
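Note (editorial aside, not test output): the rendered kubeadm config above is the file that later lands at /var/tmp/minikube/kubeadm.yaml on the node (scp at 20:18:00.137322, copy at 20:18:01.088920). A hedged way to exercise such a config by hand, assuming a kubeadm v1.18.x binary and the YAML saved locally as kubeadm.yaml, is a dry run that validates and renders manifests without changing the host:
	sudo kubeadm init --config kubeadm.yaml --dry-run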
	
	I1207 20:18:00.087644   25590 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-393627 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-393627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
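Note (editorial aside, not test output): the kubelet drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf at 20:18:00.104968 below. Assuming the same profile name, one way to confirm the flags on the node would be:
	minikube ssh -p ingress-addon-legacy-393627 -- sudo systemctl cat kubelet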
	I1207 20:18:00.087713   25590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1207 20:18:00.096517   25590 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 20:18:00.096593   25590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 20:18:00.104968   25590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1207 20:18:00.121436   25590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1207 20:18:00.137322   25590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1207 20:18:00.153584   25590 ssh_runner.go:195] Run: grep 192.168.39.25	control-plane.minikube.internal$ /etc/hosts
	I1207 20:18:00.157250   25590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:18:00.169735   25590 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627 for IP: 192.168.39.25
	I1207 20:18:00.169767   25590 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:18:00.169943   25590 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 20:18:00.169993   25590 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 20:18:00.170053   25590 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.key
	I1207 20:18:00.170069   25590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt with IP's: []
	I1207 20:18:00.308174   25590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt ...
	I1207 20:18:00.308206   25590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: {Name:mk1522ba6273525e0e13c6c01f03e932c6226b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:18:00.308359   25590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.key ...
	I1207 20:18:00.308372   25590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.key: {Name:mk7410f244d7cfdaf0ababf7265cd24b752bbda2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:18:00.308445   25590 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.key.57fdfe55
	I1207 20:18:00.308460   25590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.crt.57fdfe55 with IP's: [192.168.39.25 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 20:18:00.365041   25590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.crt.57fdfe55 ...
	I1207 20:18:00.365071   25590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.crt.57fdfe55: {Name:mkf0656ee9a3bbe248138d3f6255c7f838c14d81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:18:00.365217   25590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.key.57fdfe55 ...
	I1207 20:18:00.365230   25590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.key.57fdfe55: {Name:mkb8e9f73393d2a656a4ef596c644521aeaefd0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:18:00.365292   25590 certs.go:337] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.crt.57fdfe55 -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.crt
	I1207 20:18:00.365368   25590 certs.go:341] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.key.57fdfe55 -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.key
	I1207 20:18:00.365432   25590 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/proxy-client.key
	I1207 20:18:00.365454   25590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/proxy-client.crt with IP's: []
	I1207 20:18:00.659817   25590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/proxy-client.crt ...
	I1207 20:18:00.659848   25590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/proxy-client.crt: {Name:mkec0587641fb28a138f94edc70b1b499d99b52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:18:00.659994   25590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/proxy-client.key ...
	I1207 20:18:00.660007   25590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/proxy-client.key: {Name:mk55015c0ae8c141b2bb6421dbb6e0f246333194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:18:00.660072   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 20:18:00.660089   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 20:18:00.660103   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 20:18:00.660116   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 20:18:00.660128   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 20:18:00.660140   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 20:18:00.660152   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 20:18:00.660165   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 20:18:00.660219   25590 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 20:18:00.660252   25590 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 20:18:00.660262   25590 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 20:18:00.660290   25590 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 20:18:00.660314   25590 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:18:00.660335   25590 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 20:18:00.660375   25590 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:18:00.660401   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem -> /usr/share/ca-certificates/16840.pem
	I1207 20:18:00.660413   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /usr/share/ca-certificates/168402.pem
	I1207 20:18:00.660429   25590 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:18:00.661050   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 20:18:00.685625   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 20:18:00.709696   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 20:18:00.732919   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 20:18:00.756509   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:18:00.779080   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 20:18:00.802249   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:18:00.825791   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:18:00.848769   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 20:18:00.871719   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 20:18:00.895048   25590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:18:00.918842   25590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 20:18:00.935066   25590 ssh_runner.go:195] Run: openssl version
	I1207 20:18:00.940535   25590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:18:00.951230   25590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:18:00.955645   25590 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:18:00.955703   25590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:18:00.961347   25590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 20:18:00.971721   25590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 20:18:00.982554   25590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 20:18:00.987481   25590 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:18:00.987535   25590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 20:18:00.993563   25590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 20:18:01.003676   25590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 20:18:01.014757   25590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 20:18:01.020383   25590 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:18:01.020439   25590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 20:18:01.026121   25590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 20:18:01.035758   25590 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:18:01.039782   25590 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:18:01.039835   25590 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-393627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-393627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:18:01.039951   25590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 20:18:01.040000   25590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 20:18:01.079184   25590 cri.go:89] found id: ""
	I1207 20:18:01.079249   25590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 20:18:01.088920   25590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 20:18:01.098328   25590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 20:18:01.107301   25590 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 20:18:01.107353   25590 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1207 20:18:01.166394   25590 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1207 20:18:01.166512   25590 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 20:18:01.297087   25590 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 20:18:01.297297   25590 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 20:18:01.297440   25590 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 20:18:01.510113   25590 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 20:18:01.511212   25590 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 20:18:01.511321   25590 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 20:18:01.621656   25590 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 20:18:01.624074   25590 out.go:204]   - Generating certificates and keys ...
	I1207 20:18:01.624183   25590 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 20:18:01.624260   25590 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 20:18:01.917610   25590 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 20:18:01.987733   25590 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1207 20:18:02.260498   25590 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1207 20:18:02.709236   25590 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1207 20:18:02.829647   25590 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1207 20:18:02.829841   25590 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-393627 localhost] and IPs [192.168.39.25 127.0.0.1 ::1]
	I1207 20:18:02.986668   25590 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1207 20:18:02.986962   25590 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-393627 localhost] and IPs [192.168.39.25 127.0.0.1 ::1]
	I1207 20:18:03.444297   25590 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 20:18:03.682954   25590 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 20:18:03.933888   25590 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1207 20:18:03.934270   25590 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 20:18:04.031944   25590 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 20:18:04.196004   25590 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 20:18:04.352813   25590 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 20:18:04.513959   25590 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 20:18:04.514711   25590 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 20:18:04.516611   25590 out.go:204]   - Booting up control plane ...
	I1207 20:18:04.516717   25590 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 20:18:04.526303   25590 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 20:18:04.526398   25590 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 20:18:04.526508   25590 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 20:18:04.528428   25590 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 20:18:13.528859   25590 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003628 seconds
	I1207 20:18:13.529021   25590 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 20:18:13.542960   25590 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 20:18:14.069248   25590 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 20:18:14.069453   25590 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-393627 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1207 20:18:14.580023   25590 kubeadm.go:322] [bootstrap-token] Using token: uniz05.lmm283zvgtnrne55
	I1207 20:18:14.581675   25590 out.go:204]   - Configuring RBAC rules ...
	I1207 20:18:14.581796   25590 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 20:18:14.595821   25590 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 20:18:14.605859   25590 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 20:18:14.610116   25590 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 20:18:14.614652   25590 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 20:18:14.618117   25590 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 20:18:14.630061   25590 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 20:18:14.972047   25590 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 20:18:15.088338   25590 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 20:18:15.088658   25590 kubeadm.go:322] 
	I1207 20:18:15.088748   25590 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 20:18:15.088761   25590 kubeadm.go:322] 
	I1207 20:18:15.088881   25590 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 20:18:15.088901   25590 kubeadm.go:322] 
	I1207 20:18:15.088934   25590 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 20:18:15.089030   25590 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 20:18:15.089101   25590 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 20:18:15.089110   25590 kubeadm.go:322] 
	I1207 20:18:15.089175   25590 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 20:18:15.089266   25590 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 20:18:15.089361   25590 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 20:18:15.089374   25590 kubeadm.go:322] 
	I1207 20:18:15.089500   25590 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 20:18:15.089608   25590 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 20:18:15.089619   25590 kubeadm.go:322] 
	I1207 20:18:15.089723   25590 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uniz05.lmm283zvgtnrne55 \
	I1207 20:18:15.089861   25590 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 20:18:15.089919   25590 kubeadm.go:322]     --control-plane 
	I1207 20:18:15.089956   25590 kubeadm.go:322] 
	I1207 20:18:15.090070   25590 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 20:18:15.090084   25590 kubeadm.go:322] 
	I1207 20:18:15.090177   25590 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uniz05.lmm283zvgtnrne55 \
	I1207 20:18:15.090314   25590 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 20:18:15.090574   25590 kubeadm.go:322] W1207 20:18:01.148581     961 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1207 20:18:15.090702   25590 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 20:18:15.090877   25590 kubeadm.go:322] W1207 20:18:04.508237     961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1207 20:18:15.091069   25590 kubeadm.go:322] W1207 20:18:04.509592     961 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1207 20:18:15.091096   25590 cni.go:84] Creating CNI manager for ""
	I1207 20:18:15.091110   25590 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:18:15.092946   25590 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 20:18:15.094452   25590 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 20:18:15.103981   25590 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 20:18:15.128323   25590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 20:18:15.128405   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:15.128446   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=ingress-addon-legacy-393627 minikube.k8s.io/updated_at=2023_12_07T20_18_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:15.169588   25590 ops.go:34] apiserver oom_adj: -16
	I1207 20:18:15.309500   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:15.474135   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:16.131019   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:16.631429   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:17.130663   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:17.631309   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:18.131130   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:18.631229   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:19.131476   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:19.630825   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:20.130644   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:20.631329   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:21.131064   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:21.631288   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:22.130471   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:22.630591   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:23.130477   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:23.631185   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:24.130545   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:24.631147   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:25.131032   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:25.631032   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:26.131208   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:26.630624   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:27.131411   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:27.631171   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:28.130975   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:28.631140   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:29.131160   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:29.630690   25590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:18:29.842701   25590 kubeadm.go:1088] duration metric: took 14.714342282s to wait for elevateKubeSystemPrivileges.
	I1207 20:18:29.842728   25590 kubeadm.go:406] StartCluster complete in 28.802896251s
	I1207 20:18:29.842749   25590 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:18:29.842847   25590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:18:29.843774   25590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:18:29.844083   25590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 20:18:29.844116   25590 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 20:18:29.844191   25590 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-393627"
	I1207 20:18:29.844247   25590 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-393627"
	I1207 20:18:29.844247   25590 config.go:182] Loaded profile config "ingress-addon-legacy-393627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1207 20:18:29.844296   25590 host.go:66] Checking if "ingress-addon-legacy-393627" exists ...
	I1207 20:18:29.844199   25590 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-393627"
	I1207 20:18:29.844354   25590 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-393627"
	I1207 20:18:29.844750   25590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:18:29.844784   25590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:18:29.844712   25590 kapi.go:59] client config for ingress-addon-legacy-393627: &rest.Config{Host:"https://192.168.39.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:18:29.844842   25590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:18:29.844880   25590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:18:29.845529   25590 cert_rotation.go:137] Starting client certificate rotation controller
	I1207 20:18:29.859716   25590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I1207 20:18:29.859939   25590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I1207 20:18:29.860274   25590 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:18:29.860360   25590 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:18:29.860796   25590 main.go:141] libmachine: Using API Version  1
	I1207 20:18:29.860816   25590 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:18:29.860914   25590 main.go:141] libmachine: Using API Version  1
	I1207 20:18:29.860943   25590 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:18:29.861159   25590 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:18:29.861245   25590 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:18:29.861444   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetState
	I1207 20:18:29.861702   25590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:18:29.861725   25590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:18:29.863850   25590 kapi.go:59] client config for ingress-addon-legacy-393627: &rest.Config{Host:"https://192.168.39.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:18:29.864100   25590 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-393627"
	I1207 20:18:29.864141   25590 host.go:66] Checking if "ingress-addon-legacy-393627" exists ...
	I1207 20:18:29.864442   25590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:18:29.864489   25590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:18:29.877341   25590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I1207 20:18:29.877892   25590 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:18:29.878407   25590 main.go:141] libmachine: Using API Version  1
	I1207 20:18:29.878436   25590 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:18:29.878802   25590 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:18:29.878987   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetState
	I1207 20:18:29.879802   25590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45805
	I1207 20:18:29.880215   25590 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:18:29.880735   25590 main.go:141] libmachine: Using API Version  1
	I1207 20:18:29.880759   25590 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:18:29.880878   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .DriverName
	I1207 20:18:29.881077   25590 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:18:29.883208   25590 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:18:29.881555   25590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:18:29.884935   25590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:18:29.885033   25590 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:18:29.885055   25590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 20:18:29.885077   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:18:29.888274   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:18:29.888755   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:18:29.888780   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:18:29.888870   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:18:29.889081   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:18:29.889228   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:18:29.889366   25590 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa Username:docker}
	I1207 20:18:29.902109   25590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I1207 20:18:29.902554   25590 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:18:29.903105   25590 main.go:141] libmachine: Using API Version  1
	I1207 20:18:29.903135   25590 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:18:29.903472   25590 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:18:29.903742   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetState
	I1207 20:18:29.905469   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .DriverName
	I1207 20:18:29.905758   25590 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 20:18:29.905777   25590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 20:18:29.905809   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHHostname
	I1207 20:18:29.908429   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:18:29.908803   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:d8:28", ip: ""} in network mk-ingress-addon-legacy-393627: {Iface:virbr1 ExpiryTime:2023-12-07 21:17:42 +0000 UTC Type:0 Mac:52:54:00:3d:d8:28 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:ingress-addon-legacy-393627 Clientid:01:52:54:00:3d:d8:28}
	I1207 20:18:29.908848   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | domain ingress-addon-legacy-393627 has defined IP address 192.168.39.25 and MAC address 52:54:00:3d:d8:28 in network mk-ingress-addon-legacy-393627
	I1207 20:18:29.908936   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHPort
	I1207 20:18:29.909145   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHKeyPath
	I1207 20:18:29.909266   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .GetSSHUsername
	I1207 20:18:29.909378   25590 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/ingress-addon-legacy-393627/id_rsa Username:docker}
	I1207 20:18:30.009788   25590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 20:18:30.029815   25590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1207 20:18:30.106713   25590 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-393627" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1207 20:18:30.106747   25590 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1207 20:18:30.106766   25590 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 20:18:30.108583   25590 out.go:177] * Verifying Kubernetes components...
	I1207 20:18:30.110283   25590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:18:30.148097   25590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 20:18:30.878660   25590 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1207 20:18:31.252228   25590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.222381792s)
	I1207 20:18:31.252277   25590 main.go:141] libmachine: Making call to close driver server
	I1207 20:18:31.252290   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .Close
	I1207 20:18:31.252321   25590 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.14200541s)
	I1207 20:18:31.252371   25590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.104243224s)
	I1207 20:18:31.252397   25590 main.go:141] libmachine: Making call to close driver server
	I1207 20:18:31.252407   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .Close
	I1207 20:18:31.252729   25590 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:18:31.252753   25590 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:18:31.252764   25590 main.go:141] libmachine: Making call to close driver server
	I1207 20:18:31.252781   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .Close
	I1207 20:18:31.253116   25590 kapi.go:59] client config for ingress-addon-legacy-393627: &rest.Config{Host:"https://192.168.39.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(n
il), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:18:31.253451   25590 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-393627" to be "Ready" ...
	I1207 20:18:31.253679   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Closing plugin on server side
	I1207 20:18:31.253697   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Closing plugin on server side
	I1207 20:18:31.253733   25590 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:18:31.253752   25590 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:18:31.253763   25590 main.go:141] libmachine: Making call to close driver server
	I1207 20:18:31.253763   25590 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:18:31.253772   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .Close
	I1207 20:18:31.253774   25590 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:18:31.253973   25590 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:18:31.253991   25590 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:18:31.259960   25590 node_ready.go:49] node "ingress-addon-legacy-393627" has status "Ready":"True"
	I1207 20:18:31.259985   25590 node_ready.go:38] duration metric: took 6.509296ms waiting for node "ingress-addon-legacy-393627" to be "Ready" ...
	I1207 20:18:31.259996   25590 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:18:31.273824   25590 main.go:141] libmachine: Making call to close driver server
	I1207 20:18:31.273846   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) Calling .Close
	I1207 20:18:31.274152   25590 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:18:31.274173   25590 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:18:31.274176   25590 main.go:141] libmachine: (ingress-addon-legacy-393627) DBG | Closing plugin on server side
	I1207 20:18:31.275908   25590 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1207 20:18:31.277832   25590 addons.go:502] enable addons completed in 1.433715515s: enabled=[storage-provisioner default-storageclass]
	I1207 20:18:31.283023   25590 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace to be "Ready" ...
	I1207 20:18:33.308734   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:35.808218   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:37.808490   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:39.809011   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:42.308589   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:44.809449   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:47.308430   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:49.309303   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:51.808648   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:54.308257   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:56.308700   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:18:58.808804   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:19:00.809042   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:19:02.809080   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:19:05.310564   25590 pod_ready.go:102] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"False"
	I1207 20:19:07.317379   25590 pod_ready.go:92] pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace has status "Ready":"True"
	I1207 20:19:07.317406   25590 pod_ready.go:81] duration metric: took 36.034363557s waiting for pod "coredns-66bff467f8-t8qfb" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:07.317419   25590 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-vwzj7" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:09.335396   25590 pod_ready.go:102] pod "coredns-66bff467f8-vwzj7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:19:10.335860   25590 pod_ready.go:92] pod "coredns-66bff467f8-vwzj7" in "kube-system" namespace has status "Ready":"True"
	I1207 20:19:10.335891   25590 pod_ready.go:81] duration metric: took 3.018463762s waiting for pod "coredns-66bff467f8-vwzj7" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.335902   25590 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-393627" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.341403   25590 pod_ready.go:92] pod "etcd-ingress-addon-legacy-393627" in "kube-system" namespace has status "Ready":"True"
	I1207 20:19:10.341425   25590 pod_ready.go:81] duration metric: took 5.515002ms waiting for pod "etcd-ingress-addon-legacy-393627" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.341435   25590 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-393627" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.346591   25590 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-393627" in "kube-system" namespace has status "Ready":"True"
	I1207 20:19:10.346606   25590 pod_ready.go:81] duration metric: took 5.164181ms waiting for pod "kube-apiserver-ingress-addon-legacy-393627" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.346613   25590 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-393627" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.351892   25590 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-393627" in "kube-system" namespace has status "Ready":"True"
	I1207 20:19:10.351910   25590 pod_ready.go:81] duration metric: took 5.291304ms waiting for pod "kube-controller-manager-ingress-addon-legacy-393627" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.351918   25590 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k65fs" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.502546   25590 request.go:629] Waited for 148.290701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.25:8443/api/v1/nodes/ingress-addon-legacy-393627
	I1207 20:19:10.506302   25590 pod_ready.go:92] pod "kube-proxy-k65fs" in "kube-system" namespace has status "Ready":"True"
	I1207 20:19:10.506322   25590 pod_ready.go:81] duration metric: took 154.397325ms waiting for pod "kube-proxy-k65fs" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.506332   25590 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-393627" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.702768   25590 request.go:629] Waited for 196.366745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.25:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-393627
	I1207 20:19:10.902532   25590 request.go:629] Waited for 196.113269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.25:8443/api/v1/nodes/ingress-addon-legacy-393627
	I1207 20:19:10.906042   25590 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-393627" in "kube-system" namespace has status "Ready":"True"
	I1207 20:19:10.906065   25590 pod_ready.go:81] duration metric: took 399.725789ms waiting for pod "kube-scheduler-ingress-addon-legacy-393627" in "kube-system" namespace to be "Ready" ...
	I1207 20:19:10.906075   25590 pod_ready.go:38] duration metric: took 39.646068235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:19:10.906089   25590 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:19:10.906149   25590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:19:10.922705   25590 api_server.go:72] duration metric: took 40.81590846s to wait for apiserver process to appear ...
	I1207 20:19:10.922734   25590 api_server.go:88] waiting for apiserver healthz status ...
	I1207 20:19:10.922753   25590 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I1207 20:19:10.929503   25590 api_server.go:279] https://192.168.39.25:8443/healthz returned 200:
	ok
	I1207 20:19:10.930777   25590 api_server.go:141] control plane version: v1.18.20
	I1207 20:19:10.930798   25590 api_server.go:131] duration metric: took 8.057068ms to wait for apiserver health ...
	I1207 20:19:10.930808   25590 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 20:19:11.102217   25590 request.go:629] Waited for 171.347439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.25:8443/api/v1/namespaces/kube-system/pods
	I1207 20:19:11.108707   25590 system_pods.go:59] 8 kube-system pods found
	I1207 20:19:11.108738   25590 system_pods.go:61] "coredns-66bff467f8-t8qfb" [6aa3bcef-ef49-4e53-9419-a55a124c7962] Running
	I1207 20:19:11.108745   25590 system_pods.go:61] "coredns-66bff467f8-vwzj7" [34e419af-04f4-42b8-abdb-78a767f1f766] Running
	I1207 20:19:11.108751   25590 system_pods.go:61] "etcd-ingress-addon-legacy-393627" [9c66bb74-fa26-44f0-84f0-c371dfdf2ce3] Running
	I1207 20:19:11.108757   25590 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-393627" [1bde7f97-0a99-47fc-9ed1-e0695ab6c735] Running
	I1207 20:19:11.108762   25590 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-393627" [66e46c13-6c18-407e-a42c-caae4fe6a261] Running
	I1207 20:19:11.108768   25590 system_pods.go:61] "kube-proxy-k65fs" [dff73c76-093a-4e53-954c-8f3be8487d24] Running
	I1207 20:19:11.108774   25590 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-393627" [cc427f61-4b04-475e-a646-9d78840f52ab] Running
	I1207 20:19:11.108779   25590 system_pods.go:61] "storage-provisioner" [75ba2cd1-8465-4da0-bcff-ddc67c23c85a] Running
	I1207 20:19:11.108787   25590 system_pods.go:74] duration metric: took 177.972252ms to wait for pod list to return data ...
	I1207 20:19:11.108797   25590 default_sa.go:34] waiting for default service account to be created ...
	I1207 20:19:11.302265   25590 request.go:629] Waited for 193.386071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.25:8443/api/v1/namespaces/default/serviceaccounts
	I1207 20:19:11.306187   25590 default_sa.go:45] found service account: "default"
	I1207 20:19:11.306213   25590 default_sa.go:55] duration metric: took 197.40517ms for default service account to be created ...
	I1207 20:19:11.306222   25590 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 20:19:11.502677   25590 request.go:629] Waited for 196.382459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.25:8443/api/v1/namespaces/kube-system/pods
	I1207 20:19:11.509830   25590 system_pods.go:86] 8 kube-system pods found
	I1207 20:19:11.509854   25590 system_pods.go:89] "coredns-66bff467f8-t8qfb" [6aa3bcef-ef49-4e53-9419-a55a124c7962] Running
	I1207 20:19:11.509862   25590 system_pods.go:89] "coredns-66bff467f8-vwzj7" [34e419af-04f4-42b8-abdb-78a767f1f766] Running
	I1207 20:19:11.509871   25590 system_pods.go:89] "etcd-ingress-addon-legacy-393627" [9c66bb74-fa26-44f0-84f0-c371dfdf2ce3] Running
	I1207 20:19:11.509879   25590 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-393627" [1bde7f97-0a99-47fc-9ed1-e0695ab6c735] Running
	I1207 20:19:11.509886   25590 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-393627" [66e46c13-6c18-407e-a42c-caae4fe6a261] Running
	I1207 20:19:11.509891   25590 system_pods.go:89] "kube-proxy-k65fs" [dff73c76-093a-4e53-954c-8f3be8487d24] Running
	I1207 20:19:11.509897   25590 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-393627" [cc427f61-4b04-475e-a646-9d78840f52ab] Running
	I1207 20:19:11.509904   25590 system_pods.go:89] "storage-provisioner" [75ba2cd1-8465-4da0-bcff-ddc67c23c85a] Running
	I1207 20:19:11.509915   25590 system_pods.go:126] duration metric: took 203.686388ms to wait for k8s-apps to be running ...
	I1207 20:19:11.509937   25590 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 20:19:11.509986   25590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:19:11.524991   25590 system_svc.go:56] duration metric: took 15.048473ms WaitForService to wait for kubelet.
	I1207 20:19:11.525011   25590 kubeadm.go:581] duration metric: took 41.418223943s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 20:19:11.525033   25590 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:19:11.702409   25590 request.go:629] Waited for 177.319658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.25:8443/api/v1/nodes
	I1207 20:19:11.706710   25590 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:19:11.706744   25590 node_conditions.go:123] node cpu capacity is 2
	I1207 20:19:11.706758   25590 node_conditions.go:105] duration metric: took 181.718781ms to run NodePressure ...
	I1207 20:19:11.706772   25590 start.go:228] waiting for startup goroutines ...
	I1207 20:19:11.706784   25590 start.go:233] waiting for cluster config update ...
	I1207 20:19:11.706800   25590 start.go:242] writing updated cluster config ...
	I1207 20:19:11.707146   25590 ssh_runner.go:195] Run: rm -f paused
	I1207 20:19:11.754426   25590 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1207 20:19:11.756726   25590 out.go:177] 
	W1207 20:19:11.758189   25590 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1207 20:19:11.759692   25590 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1207 20:19:11.761310   25590 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-393627" cluster and "default" namespace by default
	
	* 
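	The start log above is the cluster bring-up for this test: it waits for the node to report Ready, for the system-critical pods (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to reach Ready, and for the apiserver /healthz endpoint to return 200 before printing "Done!". As a minimal, hedged sketch (not part of the captured output), the same checks can be reproduced by hand with kubectl against this profile; the context name ingress-addon-legacy-393627 and the label/timeout values come from the log, and anything else here is an illustrative assumption:

	    # Node readiness, as checked in node_ready.go
	    kubectl --context ingress-addon-legacy-393627 get node ingress-addon-legacy-393627 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	    # System-critical pod readiness, as polled in pod_ready.go (kube-dns label shown;
	    # the log also waits on the etcd/apiserver/controller-manager/proxy/scheduler pods)
	    kubectl --context ingress-addon-legacy-393627 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m

	    # apiserver healthz probe, as in api_server.go against https://192.168.39.25:8443/healthz
	    kubectl --context ingress-addon-legacy-393627 get --raw /healthz

	The CRI-O journal below covers the same node from boot (20:17:38 UTC) through 20:22:23 UTC.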
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 20:17:38 UTC, ends at Thu 2023-12-07 20:22:23 UTC. --
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.747884446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701980543747870510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=7dd5e2b5-104b-4eed-b8d6-2c695c20da09 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.748302855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=61628b72-c2e9-4cc8-9442-a0d4dd2de5e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.748377608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=61628b72-c2e9-4cc8-9442-a0d4dd2de5e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.748673426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b1b719199f7288e5e300e914f799b406c82f7aec127e896ee9247834a16b5fe,PodSandboxId:4343caa2484ab346599c7b0c197e2edfc9d40c28979ac0694ac10cc12b6e82b4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701980534455331302,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-p2vqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bebcca27-270b-4dce-9115-05d15b8c1b2c,},Annotations:map[string]string{io.kubernetes.container.hash: a2ed8f3f,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551cf4fca4c6ca9261612ca73cff267af055a4c4f7007d0eb82fe77641f0a4a6,PodSandboxId:fe685bc90cf98e4fe06c37ea4faa06ba45bbb84eed24b575dbdb3f0239c9a980,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701980391726108324,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8b54a24-0cbf-4d6f-83d8-7c9169c1361c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9ba728b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa634228d1efa7f7a47bafffd7f6b7a468b17ae123689449e3ad4b27e4aafef0,PodSandboxId:2ece5d11ae06cb679073b9e7b7bfa0d3b0be769339cba61e9c3359c66ef722cf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701980367892607476,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-wbcrx,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf66d889-e5e7-4ea0-85e2-010059ecd24c,},Annotations:map[string]string{io.kubernetes.container.hash: a3121ecf,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ebc97397240a6ad401cb94a2488bc4b11984d51ef852426f97f7aef136bc70c0,PodSandboxId:71f1802816a9c00d5e4b796b6d67e50c3159fa6980f0738ec5f589a3e6bbbdec,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701980358551675406,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5xzg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4feb13a5-c1de-48bc-afb3-6595ee58ce54,},Annotations:map[string]string{io.kubernetes.container.hash: 1252b940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6decc21c959d718f023d7f61c2e3cae371789a2270cc2653b723de26b29327cf,PodSandboxId:5a842e20179808110666de1577ab52abe60906336bb59749e24c93927d27164d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701980357402348685,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5jmhd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e21f6f74-9ec4-40dd-90c8-ebe2f316c1b0,},Annotations:map[string]string{io.kubernetes.container.hash: 375d9520,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f18a1d5c78b1575e9d62580f1a8b950d3668a2a15e1b6c365e4edea1e28e405,PodSandboxId:acdf7be70183b7d04b3babecd2f7ee8e20c149d565090cfe6404d57557d99aae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701980312074657364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ba2cd1-8465-4da0-bcff-ddc67c23c85a,},Annotations:map[string]string{io.kubernetes.container.hash: 78f3fe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f772ff4685628ec719ac16868a5d02879f01085887d45cf07a54ed1038c5a409,PodSandboxId:a7c16e80709c3153ba9d0609a60534acbafc9b2d8a1664dac99f2ae96ff95db2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701980311618869489,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k65fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff73c76-093a-4e53-954c-8f3be8487d24,},Annotations:map[string]string{io.kubernetes.container.hash: a863891d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1e4f41bcc20f7c222a63cb04654c88893a5232527eb502ef2285ebf7502128,PodSandboxId:e0c738ba499e05f0ecda225361baee549f73076f4615508e09dfe2fc3614944a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00
a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701980310954296562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-vwzj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34e419af-04f4-42b8-abdb-78a767f1f766,},Annotations:map[string]string{io.kubernetes.container.hash: 634cbbe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5b6033e84663e1266b69a5c4a5757ff01fc38707ae27cf41c05644005a3cea,PodS
andboxId:44411f2559c9f20c1be15f1c42059be2aa9daa79d3a85cb0a66f4fe601160a7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701980310931048791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-t8qfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aa3bcef-ef49-4e53-9419-a55a124c7962,},Annotations:map[string]string{io.kubernetes.container.hash: 634cbbe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2cae134f80a5bd93d6793e23333cc7412a50ff5ef9b55609189cab586a5537,PodSandboxId:59baba4f46417e4498da3a3afbf41fbf6f3b3efe22e82d0195bb2fd382a146f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701980287528582406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11708fc53219b9bb2ca683bde0830989,},Annotations:map[string]string{io.kubernetes.container.hash: 183bfa3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0039ff8bb08920a6356d1b38d71d0c4f05d1ae8bef2f0ee12abc48446ac9865,PodSandboxId:899ad6ad486ad070296421bdb262f3903b57b379d2e47bfb16f9da4214388bb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701980286371865801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a07c04136ab527aedf1d540bfc6b5d8,},Annotations:map[string]string{io.kubernetes.container.hash: c7485a13,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755b01a4e0aaf0d145fdf8f1b99a08accf6d6c2728b86367fa1603c95dc23abd,PodSandboxId:bf7d4999aaa874d8961f6c0ea2929c5be068147506f858f2837e5d69044c5e8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701980286212246633,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bb1bf509fbda993889b98caa7e5cd9e5e731463a305582489e75204abab814,PodSandboxId:14b6a1c823e4fc18a8054be7ce0cf075403a2526d5ca1c3b2c5ba1e315dfa9f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701980286016770493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=61628b72-c2e9-4cc8-9442-a0d4dd2de5e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.789009549Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e750b2d1-095d-4a9c-b443-16f18fc8a32f name=/runtime.v1.RuntimeService/Version
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.789101484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e750b2d1-095d-4a9c-b443-16f18fc8a32f name=/runtime.v1.RuntimeService/Version
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.790368562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1292e20d-22e9-4e26-b699-ef05d10bed55 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.790928611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701980543790910565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=1292e20d-22e9-4e26-b699-ef05d10bed55 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.791361370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1bdcda72-14a2-495d-81d9-e0cc13283050 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.791488765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1bdcda72-14a2-495d-81d9-e0cc13283050 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.791785088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b1b719199f7288e5e300e914f799b406c82f7aec127e896ee9247834a16b5fe,PodSandboxId:4343caa2484ab346599c7b0c197e2edfc9d40c28979ac0694ac10cc12b6e82b4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701980534455331302,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-p2vqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bebcca27-270b-4dce-9115-05d15b8c1b2c,},Annotations:map[string]string{io.kubernetes.container.hash: a2ed8f3f,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551cf4fca4c6ca9261612ca73cff267af055a4c4f7007d0eb82fe77641f0a4a6,PodSandboxId:fe685bc90cf98e4fe06c37ea4faa06ba45bbb84eed24b575dbdb3f0239c9a980,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701980391726108324,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8b54a24-0cbf-4d6f-83d8-7c9169c1361c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9ba728b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa634228d1efa7f7a47bafffd7f6b7a468b17ae123689449e3ad4b27e4aafef0,PodSandboxId:2ece5d11ae06cb679073b9e7b7bfa0d3b0be769339cba61e9c3359c66ef722cf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701980367892607476,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-wbcrx,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf66d889-e5e7-4ea0-85e2-010059ecd24c,},Annotations:map[string]string{io.kubernetes.container.hash: a3121ecf,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ebc97397240a6ad401cb94a2488bc4b11984d51ef852426f97f7aef136bc70c0,PodSandboxId:71f1802816a9c00d5e4b796b6d67e50c3159fa6980f0738ec5f589a3e6bbbdec,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701980358551675406,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5xzg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4feb13a5-c1de-48bc-afb3-6595ee58ce54,},Annotations:map[string]string{io.kubernetes.container.hash: 1252b940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6decc21c959d718f023d7f61c2e3cae371789a2270cc2653b723de26b29327cf,PodSandboxId:5a842e20179808110666de1577ab52abe60906336bb59749e24c93927d27164d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701980357402348685,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5jmhd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e21f6f74-9ec4-40dd-90c8-ebe2f316c1b0,},Annotations:map[string]string{io.kubernetes.container.hash: 375d9520,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f18a1d5c78b1575e9d62580f1a8b950d3668a2a15e1b6c365e4edea1e28e405,PodSandboxId:acdf7be70183b7d04b3babecd2f7ee8e20c149d565090cfe6404d57557d99aae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701980312074657364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ba2cd1-8465-4da0-bcff-ddc67c23c85a,},Annotations:map[string]string{io.kubernetes.container.hash: 78f3fe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f772ff4685628ec719ac16868a5d02879f01085887d45cf07a54ed1038c5a409,PodSandboxId:a7c16e80709c3153ba9d0609a60534acbafc9b2d8a1664dac99f2ae96ff95db2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701980311618869489,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k65fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff73c76-093a-4e53-954c-8f3be8487d24,},Annotations:map[string]string{io.kubernetes.container.hash: a863891d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1e4f41bcc20f7c222a63cb04654c88893a5232527eb502ef2285ebf7502128,PodSandboxId:e0c738ba499e05f0ecda225361baee549f73076f4615508e09dfe2fc3614944a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00
a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701980310954296562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-vwzj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34e419af-04f4-42b8-abdb-78a767f1f766,},Annotations:map[string]string{io.kubernetes.container.hash: 634cbbe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5b6033e84663e1266b69a5c4a5757ff01fc38707ae27cf41c05644005a3cea,PodS
andboxId:44411f2559c9f20c1be15f1c42059be2aa9daa79d3a85cb0a66f4fe601160a7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701980310931048791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-t8qfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aa3bcef-ef49-4e53-9419-a55a124c7962,},Annotations:map[string]string{io.kubernetes.container.hash: 634cbbe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2cae134f80a5bd93d6793e23333cc7412a50ff5ef9b55609189cab586a5537,PodSandboxId:59baba4f46417e4498da3a3afbf41fbf6f3b3efe22e82d0195bb2fd382a146f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701980287528582406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11708fc53219b9bb2ca683bde0830989,},Annotations:map[string]string{io.kubernetes.container.hash: 183bfa3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0039ff8bb08920a6356d1b38d71d0c4f05d1ae8bef2f0ee12abc48446ac9865,PodSandboxId:899ad6ad486ad070296421bdb262f3903b57b379d2e47bfb16f9da4214388bb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701980286371865801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a07c04136ab527aedf1d540bfc6b5d8,},Annotations:map[string]string{io.kubernetes.container.hash: c7485a13,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755b01a4e0aaf0d145fdf8f1b99a08accf6d6c2728b86367fa1603c95dc23abd,PodSandboxId:bf7d4999aaa874d8961f6c0ea2929c5be068147506f858f2837e5d69044c5e8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701980286212246633,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bb1bf509fbda993889b98caa7e5cd9e5e731463a305582489e75204abab814,PodSandboxId:14b6a1c823e4fc18a8054be7ce0cf075403a2526d5ca1c3b2c5ba1e315dfa9f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701980286016770493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1bdcda72-14a2-495d-81d9-e0cc13283050 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.831510318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6332278c-a2f7-4246-93ac-6bfe5eb615c1 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.831591026Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6332278c-a2f7-4246-93ac-6bfe5eb615c1 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.832264201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f54e18a5-2385-4510-b6d6-3a8528e1e06a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.832851936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701980543832835512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=f54e18a5-2385-4510-b6d6-3a8528e1e06a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.833495363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1d799d2c-6e77-4db9-8dab-2d889c8bf89c name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.833567243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1d799d2c-6e77-4db9-8dab-2d889c8bf89c name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.833826844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b1b719199f7288e5e300e914f799b406c82f7aec127e896ee9247834a16b5fe,PodSandboxId:4343caa2484ab346599c7b0c197e2edfc9d40c28979ac0694ac10cc12b6e82b4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701980534455331302,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-p2vqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bebcca27-270b-4dce-9115-05d15b8c1b2c,},Annotations:map[string]string{io.kubernetes.container.hash: a2ed8f3f,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551cf4fca4c6ca9261612ca73cff267af055a4c4f7007d0eb82fe77641f0a4a6,PodSandboxId:fe685bc90cf98e4fe06c37ea4faa06ba45bbb84eed24b575dbdb3f0239c9a980,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701980391726108324,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8b54a24-0cbf-4d6f-83d8-7c9169c1361c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9ba728b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa634228d1efa7f7a47bafffd7f6b7a468b17ae123689449e3ad4b27e4aafef0,PodSandboxId:2ece5d11ae06cb679073b9e7b7bfa0d3b0be769339cba61e9c3359c66ef722cf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701980367892607476,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-wbcrx,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf66d889-e5e7-4ea0-85e2-010059ecd24c,},Annotations:map[string]string{io.kubernetes.container.hash: a3121ecf,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ebc97397240a6ad401cb94a2488bc4b11984d51ef852426f97f7aef136bc70c0,PodSandboxId:71f1802816a9c00d5e4b796b6d67e50c3159fa6980f0738ec5f589a3e6bbbdec,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701980358551675406,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5xzg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4feb13a5-c1de-48bc-afb3-6595ee58ce54,},Annotations:map[string]string{io.kubernetes.container.hash: 1252b940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6decc21c959d718f023d7f61c2e3cae371789a2270cc2653b723de26b29327cf,PodSandboxId:5a842e20179808110666de1577ab52abe60906336bb59749e24c93927d27164d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701980357402348685,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5jmhd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e21f6f74-9ec4-40dd-90c8-ebe2f316c1b0,},Annotations:map[string]string{io.kubernetes.container.hash: 375d9520,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f18a1d5c78b1575e9d62580f1a8b950d3668a2a15e1b6c365e4edea1e28e405,PodSandboxId:acdf7be70183b7d04b3babecd2f7ee8e20c149d565090cfe6404d57557d99aae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701980312074657364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ba2cd1-8465-4da0-bcff-ddc67c23c85a,},Annotations:map[string]string{io.kubernetes.container.hash: 78f3fe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f772ff4685628ec719ac16868a5d02879f01085887d45cf07a54ed1038c5a409,PodSandboxId:a7c16e80709c3153ba9d0609a60534acbafc9b2d8a1664dac99f2ae96ff95db2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701980311618869489,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k65fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff73c76-093a-4e53-954c-8f3be8487d24,},Annotations:map[string]string{io.kubernetes.container.hash: a863891d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1e4f41bcc20f7c222a63cb04654c88893a5232527eb502ef2285ebf7502128,PodSandboxId:e0c738ba499e05f0ecda225361baee549f73076f4615508e09dfe2fc3614944a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00
a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701980310954296562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-vwzj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34e419af-04f4-42b8-abdb-78a767f1f766,},Annotations:map[string]string{io.kubernetes.container.hash: 634cbbe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5b6033e84663e1266b69a5c4a5757ff01fc38707ae27cf41c05644005a3cea,PodS
andboxId:44411f2559c9f20c1be15f1c42059be2aa9daa79d3a85cb0a66f4fe601160a7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701980310931048791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-t8qfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aa3bcef-ef49-4e53-9419-a55a124c7962,},Annotations:map[string]string{io.kubernetes.container.hash: 634cbbe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2cae134f80a5bd93d6793e23333cc7412a50ff5ef9b55609189cab586a5537,PodSandboxId:59baba4f46417e4498da3a3afbf41fbf6f3b3efe22e82d0195bb2fd382a146f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701980287528582406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11708fc53219b9bb2ca683bde0830989,},Annotations:map[string]string{io.kubernetes.container.hash: 183bfa3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0039ff8bb08920a6356d1b38d71d0c4f05d1ae8bef2f0ee12abc48446ac9865,PodSandboxId:899ad6ad486ad070296421bdb262f3903b57b379d2e47bfb16f9da4214388bb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701980286371865801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a07c04136ab527aedf1d540bfc6b5d8,},Annotations:map[string]string{io.kubernetes.container.hash: c7485a13,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755b01a4e0aaf0d145fdf8f1b99a08accf6d6c2728b86367fa1603c95dc23abd,PodSandboxId:bf7d4999aaa874d8961f6c0ea2929c5be068147506f858f2837e5d69044c5e8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701980286212246633,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bb1bf509fbda993889b98caa7e5cd9e5e731463a305582489e75204abab814,PodSandboxId:14b6a1c823e4fc18a8054be7ce0cf075403a2526d5ca1c3b2c5ba1e315dfa9f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701980286016770493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1d799d2c-6e77-4db9-8dab-2d889c8bf89c name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.868661860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ef82da63-9267-4674-8d7e-5501f7ebda21 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.868746705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ef82da63-9267-4674-8d7e-5501f7ebda21 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.869947292Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=54c6691e-7586-4ad1-ae7c-7d0c17337393 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.870512989Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701980543870404145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=54c6691e-7586-4ad1-ae7c-7d0c17337393 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.871277967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=087dffa5-4fea-4557-bb80-95772e92ceb4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.871345301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=087dffa5-4fea-4557-bb80-95772e92ceb4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:22:23 ingress-addon-legacy-393627 crio[719]: time="2023-12-07 20:22:23.871688835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b1b719199f7288e5e300e914f799b406c82f7aec127e896ee9247834a16b5fe,PodSandboxId:4343caa2484ab346599c7b0c197e2edfc9d40c28979ac0694ac10cc12b6e82b4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701980534455331302,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-p2vqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bebcca27-270b-4dce-9115-05d15b8c1b2c,},Annotations:map[string]string{io.kubernetes.container.hash: a2ed8f3f,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551cf4fca4c6ca9261612ca73cff267af055a4c4f7007d0eb82fe77641f0a4a6,PodSandboxId:fe685bc90cf98e4fe06c37ea4faa06ba45bbb84eed24b575dbdb3f0239c9a980,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701980391726108324,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8b54a24-0cbf-4d6f-83d8-7c9169c1361c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9ba728b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa634228d1efa7f7a47bafffd7f6b7a468b17ae123689449e3ad4b27e4aafef0,PodSandboxId:2ece5d11ae06cb679073b9e7b7bfa0d3b0be769339cba61e9c3359c66ef722cf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701980367892607476,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-wbcrx,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf66d889-e5e7-4ea0-85e2-010059ecd24c,},Annotations:map[string]string{io.kubernetes.container.hash: a3121ecf,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ebc97397240a6ad401cb94a2488bc4b11984d51ef852426f97f7aef136bc70c0,PodSandboxId:71f1802816a9c00d5e4b796b6d67e50c3159fa6980f0738ec5f589a3e6bbbdec,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701980358551675406,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5xzg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4feb13a5-c1de-48bc-afb3-6595ee58ce54,},Annotations:map[string]string{io.kubernetes.container.hash: 1252b940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6decc21c959d718f023d7f61c2e3cae371789a2270cc2653b723de26b29327cf,PodSandboxId:5a842e20179808110666de1577ab52abe60906336bb59749e24c93927d27164d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701980357402348685,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5jmhd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e21f6f74-9ec4-40dd-90c8-ebe2f316c1b0,},Annotations:map[string]string{io.kubernetes.container.hash: 375d9520,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f18a1d5c78b1575e9d62580f1a8b950d3668a2a15e1b6c365e4edea1e28e405,PodSandboxId:acdf7be70183b7d04b3babecd2f7ee8e20c149d565090cfe6404d57557d99aae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701980312074657364,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ba2cd1-8465-4da0-bcff-ddc67c23c85a,},Annotations:map[string]string{io.kubernetes.container.hash: 78f3fe78,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f772ff4685628ec719ac16868a5d02879f01085887d45cf07a54ed1038c5a409,PodSandboxId:a7c16e80709c3153ba9d0609a60534acbafc9b2d8a1664dac99f2ae96ff95db2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701980311618869489,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k65fs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff73c76-093a-4e53-954c-8f3be8487d24,},Annotations:map[string]string{io.kubernetes.container.hash: a863891d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1e4f41bcc20f7c222a63cb04654c88893a5232527eb502ef2285ebf7502128,PodSandboxId:e0c738ba499e05f0ecda225361baee549f73076f4615508e09dfe2fc3614944a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00
a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701980310954296562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-vwzj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34e419af-04f4-42b8-abdb-78a767f1f766,},Annotations:map[string]string{io.kubernetes.container.hash: 634cbbe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5b6033e84663e1266b69a5c4a5757ff01fc38707ae27cf41c05644005a3cea,PodS
andboxId:44411f2559c9f20c1be15f1c42059be2aa9daa79d3a85cb0a66f4fe601160a7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701980310931048791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-t8qfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6aa3bcef-ef49-4e53-9419-a55a124c7962,},Annotations:map[string]string{io.kubernetes.container.hash: 634cbbe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2cae134f80a5bd93d6793e23333cc7412a50ff5ef9b55609189cab586a5537,PodSandboxId:59baba4f46417e4498da3a3afbf41fbf6f3b3efe22e82d0195bb2fd382a146f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701980287528582406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11708fc53219b9bb2ca683bde0830989,},Annotations:map[string]string{io.kubernetes.container.hash: 183bfa3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0039ff8bb08920a6356d1b38d71d0c4f05d1ae8bef2f0ee12abc48446ac9865,PodSandboxId:899ad6ad486ad070296421bdb262f3903b57b379d2e47bfb16f9da4214388bb8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701980286371865801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a07c04136ab527aedf1d540bfc6b5d8,},Annotations:map[string]string{io.kubernetes.container.hash: c7485a13,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755b01a4e0aaf0d145fdf8f1b99a08accf6d6c2728b86367fa1603c95dc23abd,PodSandboxId:bf7d4999aaa874d8961f6c0ea2929c5be068147506f858f2837e5d69044c5e8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701980286212246633,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36bb1bf509fbda993889b98caa7e5cd9e5e731463a305582489e75204abab814,PodSandboxId:14b6a1c823e4fc18a8054be7ce0cf075403a2526d5ca1c3b2c5ba1e315dfa9f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701980286016770493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-393627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=087dffa5-4fea-4557-bb80-95772e92ceb4 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3b1b719199f72       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            9 seconds ago       Running             hello-world-app           0                   4343caa2484ab       hello-world-app-5f5d8b66bb-p2vqx
	551cf4fca4c6c       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                    2 minutes ago       Running             nginx                     0                   fe685bc90cf98       nginx
	aa634228d1efa       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   2ece5d11ae06c       ingress-nginx-controller-7fcf777cb7-wbcrx
	ebc97397240a6       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   71f1802816a9c       ingress-nginx-admission-patch-5xzg2
	6decc21c959d7       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   5a842e2017980       ingress-nginx-admission-create-5jmhd
	5f18a1d5c78b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   acdf7be70183b       storage-provisioner
	f772ff4685628       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   a7c16e80709c3       kube-proxy-k65fs
	dc1e4f41bcc20       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   e0c738ba499e0       coredns-66bff467f8-vwzj7
	bb5b6033e8466       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   44411f2559c9f       coredns-66bff467f8-t8qfb
	7c2cae134f80a       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   59baba4f46417       etcd-ingress-addon-legacy-393627
	a0039ff8bb089       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   899ad6ad486ad       kube-apiserver-ingress-addon-legacy-393627
	755b01a4e0aaf       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   bf7d4999aaa87       kube-scheduler-ingress-addon-legacy-393627
	36bb1bf509fbd       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   14b6a1c823e4f       kube-controller-manager-ingress-addon-legacy-393627
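
The listing above is what CRI-O returns for the ListContainers RPC that the crio debug log keeps showing. As a hedged illustration only (not part of the test suite), a minimal Go sketch of issuing that same call against the CRI socket named in the node's cri-socket annotation (/var/run/crio/crio.sock), assuming the standard k8s.io/cri-api v1 gRPC client:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI-O socket; the path comes from the node's
        // kubeadm.alpha.kubernetes.io/cri-socket annotation shown below.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimev1.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter reproduces the "No filters were applied, returning
        // full container list" path seen in the crio debug output above.
        resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %s  %s\n", c.Id, c.GetMetadata().GetName(), c.State)
        }
    }
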
	
	* 
	* ==> coredns [bb5b6033e84663e1266b69a5c4a5757ff01fc38707ae27cf41c05644005a3cea] <==
	* [INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 6dca4351036a5cca7eefa7c93a3dea30
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48317 - 48255 "HINFO IN 5163627724105346414.3417843329728547574. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021729575s
	[INFO] 10.244.0.6:52313 - 52141 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000562406s
	[INFO] 10.244.0.6:52313 - 16910 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000122203s
	[INFO] 10.244.0.6:52313 - 15326 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0003109s
	[INFO] 10.244.0.6:52313 - 41452 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00009331s
	[INFO] 10.244.0.6:52313 - 31827 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000115759s
	[INFO] 10.244.0.6:52313 - 46074 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000105463s
	[INFO] 10.244.0.6:52313 - 42044 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000248552s
	[INFO] 10.244.0.6:48539 - 31255 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000058594s
	[INFO] 10.244.0.6:48539 - 22656 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000045991s
	[INFO] 10.244.0.6:48539 - 49717 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049922s
	[INFO] 10.244.0.6:48539 - 23025 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038424s
	[INFO] 10.244.0.6:48539 - 64037 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031913s
	[INFO] 10.244.0.6:48539 - 47145 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000116276s
	[INFO] 10.244.0.6:48539 - 2230 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000088877s
	[INFO] 10.244.0.6:59776 - 63692 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000081096s
	[INFO] 10.244.0.6:59776 - 57995 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000122575s
	[INFO] 10.244.0.6:59776 - 2907 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036879s
	[INFO] 10.244.0.6:59776 - 32475 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003421s
	[INFO] 10.244.0.6:59776 - 61193 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032515s
	[INFO] 10.244.0.6:59776 - 37958 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069512s
	[INFO] 10.244.0.6:59776 - 37092 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00020766s
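
The repeated NXDOMAIN answers above are the expected resolv.conf search-path expansion at work: the client walks hello-world-app.default.svc.cluster.local through the ingress-nginx.svc.cluster.local, svc.cluster.local and cluster.local suffixes before the unsuffixed name resolves with NOERROR. A hedged sketch (not from the test code) of how a Go client can skip that expansion by passing a fully qualified name with a trailing dot:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        // The trailing dot marks the name as absolute, so the resolver does not
        // append the pod's resolv.conf search domains before querying CoreDNS.
        addrs, err := net.DefaultResolver.LookupHost(ctx,
            "hello-world-app.default.svc.cluster.local.")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("resolved:", addrs)
    }
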
	
	* 
	* ==> coredns [dc1e4f41bcc20f7c222a63cb04654c88893a5232527eb502ef2285ebf7502128] <==
	* CoreDNS-1.6.7
	linux/amd64, go1.13.6, da7f65b
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 6dca4351036a5cca7eefa7c93a3dea30
	[INFO] Reloading complete
	[INFO] 10.244.0.6:53681 - 34105 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000286505s
	[INFO] 10.244.0.6:53681 - 25718 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000771083s
	[INFO] 10.244.0.6:53681 - 40319 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000113029s
	[INFO] 10.244.0.6:53681 - 60562 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000472485s
	[INFO] 10.244.0.6:53681 - 3315 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000155831s
	[INFO] 10.244.0.6:53681 - 13555 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000106064s
	[INFO] 10.244.0.6:53681 - 44891 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00010026s
	I1207 20:19:01.213293       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-07 20:18:31.212547683 +0000 UTC m=+0.044792743) (total time: 30.000634182s):
	Trace[2019727887]: [30.000634182s] [30.000634182s] END
	E1207 20:19:01.213367       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1207 20:19:01.213673       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-07 20:18:31.212948089 +0000 UTC m=+0.045193151) (total time: 30.000692402s):
	Trace[1427131847]: [30.000692402s] [30.000692402s] END
	E1207 20:19:01.213710       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1207 20:19:01.213982       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-07 20:18:31.213271174 +0000 UTC m=+0.045516239) (total time: 30.000699371s):
	Trace[939984059]: [30.000699371s] [30.000699371s] END
	E1207 20:19:01.214013       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-393627
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-393627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=ingress-addon-legacy-393627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T20_18_15_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:18:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-393627
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:22:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:22:15 +0000   Thu, 07 Dec 2023 20:18:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:22:15 +0000   Thu, 07 Dec 2023 20:18:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:22:15 +0000   Thu, 07 Dec 2023 20:18:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:22:15 +0000   Thu, 07 Dec 2023 20:18:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.25
	  Hostname:    ingress-addon-legacy-393627
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 9adc5f2fac534af083e92f918f32879b
	  System UUID:                9adc5f2f-ac53-4af0-83e9-2f918f32879b
	  Boot ID:                    c85e930a-6ed1-48aa-9510-093d7c6ee988
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-p2vqx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 coredns-66bff467f8-t8qfb                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m55s
	  kube-system                 coredns-66bff467f8-vwzj7                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m55s
	  kube-system                 etcd-ingress-addon-legacy-393627                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-apiserver-ingress-addon-legacy-393627             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-393627    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-k65fs                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-scheduler-ingress-addon-legacy-393627             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             140Mi (3%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m9s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s   kubelet     Node ingress-addon-legacy-393627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s   kubelet     Node ingress-addon-legacy-393627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s   kubelet     Node ingress-addon-legacy-393627 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m59s  kubelet     Node ingress-addon-legacy-393627 status is now: NodeReady
	  Normal  Starting                 3m53s  kube-proxy  Starting kube-proxy.
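
The node description above (conditions, capacity, events) is read from the cluster's Node object. As an illustrative sketch only, assuming client-go and a kubeconfig pointing at this minikube profile (neither is part of the report's own tooling), the same condition table can be fetched programmatically:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes ~/.kube/config contains a context for this cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "ingress-addon-legacy-393627", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Print the same Type/Status/Reason columns kubectl describe shows above.
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
    }
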
	
	* 
	* ==> dmesg <==
	* [Dec 7 20:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093679] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.394778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.383954] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150417] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.027369] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.915324] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.097328] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.141783] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.104189] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.194950] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[Dec 7 20:18] systemd-fstab-generator[1031]: Ignoring "noauto" for root device
	[  +3.175765] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.945637] systemd-fstab-generator[1437]: Ignoring "noauto" for root device
	[ +15.864846] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 7 20:19] kauditd_printk_skb: 16 callbacks suppressed
	[ +10.374764] kauditd_printk_skb: 4 callbacks suppressed
	[ +27.972778] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.815955] kauditd_printk_skb: 3 callbacks suppressed
	[Dec 7 20:22] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [7c2cae134f80a5bd93d6793e23333cc7412a50ff5ef9b55609189cab586a5537] <==
	* 2023-12-07 20:18:07.686572 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-07 20:18:07.690070 I | etcdserver: 46b6e3fd62fd4110 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/07 20:18:07 INFO: 46b6e3fd62fd4110 switched to configuration voters=(5095510705843290384)
	2023-12-07 20:18:07.690661 I | etcdserver/membership: added member 46b6e3fd62fd4110 [https://192.168.39.25:2380] to cluster f5f955826d71045b
	2023-12-07 20:18:07.691040 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-07 20:18:07.691197 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-07 20:18:07.691256 I | embed: listening for peers on 192.168.39.25:2380
	raft2023/12/07 20:18:08 INFO: 46b6e3fd62fd4110 is starting a new election at term 1
	raft2023/12/07 20:18:08 INFO: 46b6e3fd62fd4110 became candidate at term 2
	raft2023/12/07 20:18:08 INFO: 46b6e3fd62fd4110 received MsgVoteResp from 46b6e3fd62fd4110 at term 2
	raft2023/12/07 20:18:08 INFO: 46b6e3fd62fd4110 became leader at term 2
	raft2023/12/07 20:18:08 INFO: raft.node: 46b6e3fd62fd4110 elected leader 46b6e3fd62fd4110 at term 2
	2023-12-07 20:18:08.575219 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-07 20:18:08.576700 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-07 20:18:08.576779 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-07 20:18:08.576803 I | etcdserver: published {Name:ingress-addon-legacy-393627 ClientURLs:[https://192.168.39.25:2379]} to cluster f5f955826d71045b
	2023-12-07 20:18:08.576963 I | embed: ready to serve client requests
	2023-12-07 20:18:08.577468 I | embed: ready to serve client requests
	2023-12-07 20:18:08.578251 I | embed: serving client requests on 192.168.39.25:2379
	2023-12-07 20:18:08.579404 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-07 20:18:29.591781 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (158.837955ms) to execute
	2023-12-07 20:18:29.592121 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (365.883884ms) to execute
	2023-12-07 20:18:29.592283 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/job-controller\" " with result "range_response_count:1 size:195" took too long (531.610928ms) to execute
	2023-12-07 20:19:33.579946 W | etcdserver: request "header:<ID:4688401444069621769 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.25\" mod_revision:524 > success:<request_put:<key:\"/registry/masterleases/192.168.39.25\" value_size:68 lease:4688401444069621767 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.25\" > >>" with result "size:16" took too long (293.128858ms) to execute
	2023-12-07 20:19:56.803627 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true " with result "range_response_count:0 size:5" took too long (141.269224ms) to execute
	
	* 
	* ==> kernel <==
	*  20:22:24 up 4 min,  0 users,  load average: 0.39, 0.52, 0.26
	Linux ingress-addon-legacy-393627 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a0039ff8bb08920a6356d1b38d71d0c4f05d1ae8bef2f0ee12abc48446ac9865] <==
	* I1207 20:18:11.566596       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 20:18:11.566668       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1207 20:18:11.568020       1 cache.go:39] Caches are synced for autoregister controller
	I1207 20:18:11.568294       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1207 20:18:11.597537       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1207 20:18:12.463641       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1207 20:18:12.463782       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1207 20:18:12.469098       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1207 20:18:12.473792       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1207 20:18:12.473845       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1207 20:18:13.005318       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 20:18:13.067326       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1207 20:18:13.181629       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.25]
	I1207 20:18:13.182281       1 controller.go:609] quota admission added evaluator for: endpoints
	I1207 20:18:13.191231       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 20:18:13.806128       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1207 20:18:14.908389       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1207 20:18:15.022988       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1207 20:18:15.392863       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 20:18:29.593931       1 trace.go:116] Trace[437673057]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/job-controller,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/tokens-controller,client:192.168.39.25 (started: 2023-12-07 20:18:29.060177055 +0000 UTC m=+22.536522288) (total time: 533.638527ms):
	Trace[437673057]: [533.595782ms] [533.589572ms] About to write a response
	I1207 20:18:29.709749       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1207 20:18:29.712639       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1207 20:19:12.578210       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1207 20:19:45.848742       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [36bb1bf509fbda993889b98caa7e5cd9e5e731463a305582489e75204abab814] <==
	* I1207 20:18:29.757585       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-393627", UID:"35f02535-d4e7-4e6c-a96a-ad85d1e8b2cd", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-393627 event: Registered Node ingress-addon-legacy-393627 in Controller
	I1207 20:18:29.759116       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"bc1df4db-98d6-4443-9565-6c0d3e7924ad", APIVersion:"apps/v1", ResourceVersion:"216", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-k65fs
	I1207 20:18:29.789753       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f78c50fe-cc36-4da1-8577-320f348221b5", APIVersion:"apps/v1", ResourceVersion:"309", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-vwzj7
	I1207 20:18:29.810332       1 range_allocator.go:373] Set node ingress-addon-legacy-393627 PodCIDR to [10.244.0.0/24]
	I1207 20:18:29.935907       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1207 20:18:29.957535       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I1207 20:18:30.009971       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1207 20:18:30.127743       1 shared_informer.go:230] Caches are synced for job 
	I1207 20:18:30.131887       1 shared_informer.go:230] Caches are synced for resource quota 
	I1207 20:18:30.134273       1 shared_informer.go:230] Caches are synced for resource quota 
	E1207 20:18:30.174662       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E1207 20:18:30.179403       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I1207 20:18:30.202216       1 shared_informer.go:230] Caches are synced for attach detach 
	I1207 20:18:30.220178       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1207 20:18:30.224342       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1207 20:18:30.224354       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1207 20:19:12.585761       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"ea7da4a4-8fdd-44cb-9de2-0f310f6c2d1f", APIVersion:"apps/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1207 20:19:12.621281       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"19f5fd39-8edf-4370-ae13-1d56f5a72c2f", APIVersion:"batch/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-5jmhd
	I1207 20:19:12.621909       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"47998c14-103f-46f1-9b17-a12d34803a69", APIVersion:"apps/v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-wbcrx
	I1207 20:19:12.729128       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3dc91db5-c036-4074-97af-1d3feea0034f", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-5xzg2
	I1207 20:19:17.695596       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"19f5fd39-8edf-4370-ae13-1d56f5a72c2f", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1207 20:19:19.721303       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3dc91db5-c036-4074-97af-1d3feea0034f", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1207 20:22:10.401960       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"5ac000c8-a6b9-4896-9fe2-5dba678a30a8", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1207 20:22:10.430011       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"93007bbd-a42c-4ebb-865f-3573ce557f1a", APIVersion:"apps/v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-p2vqx
	E1207 20:22:20.990370       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-f8l9r" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [f772ff4685628ec719ac16868a5d02879f01085887d45cf07a54ed1038c5a409] <==
	* W1207 20:18:31.906138       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1207 20:18:31.917822       1 node.go:136] Successfully retrieved node IP: 192.168.39.25
	I1207 20:18:31.917918       1 server_others.go:186] Using iptables Proxier.
	I1207 20:18:31.918190       1 server.go:583] Version: v1.18.20
	I1207 20:18:31.920600       1 config.go:315] Starting service config controller
	I1207 20:18:31.920706       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1207 20:18:31.920745       1 config.go:133] Starting endpoints config controller
	I1207 20:18:31.920844       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1207 20:18:32.021021       1 shared_informer.go:230] Caches are synced for service config 
	I1207 20:18:32.021143       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [755b01a4e0aaf0d145fdf8f1b99a08accf6d6c2728b86367fa1603c95dc23abd] <==
	* I1207 20:18:11.562646       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1207 20:18:11.562770       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 20:18:11.562790       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 20:18:11.562812       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1207 20:18:11.578314       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 20:18:11.578640       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 20:18:11.578734       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 20:18:11.578942       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 20:18:11.578954       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 20:18:11.579338       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 20:18:11.579668       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 20:18:11.579857       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 20:18:11.580104       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 20:18:11.580207       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 20:18:11.580366       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 20:18:11.580380       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 20:18:12.405673       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 20:18:12.411044       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 20:18:12.489045       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 20:18:12.630260       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 20:18:12.680991       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 20:18:12.724228       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 20:18:12.750082       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1207 20:18:14.963610       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1207 20:18:30.007222       1 factory.go:503] pod: kube-system/coredns-66bff467f8-vwzj7 is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 20:17:38 UTC, ends at Thu 2023-12-07 20:22:24 UTC. --
	Dec 07 20:19:19 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:19:19.859147    1444 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4feb13a5-c1de-48bc-afb3-6595ee58ce54-ingress-nginx-admission-token-txxxd" (OuterVolumeSpecName: "ingress-nginx-admission-token-txxxd") pod "4feb13a5-c1de-48bc-afb3-6595ee58ce54" (UID: "4feb13a5-c1de-48bc-afb3-6595ee58ce54"). InnerVolumeSpecName "ingress-nginx-admission-token-txxxd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:19:19 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:19:19.954642    1444 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-txxxd" (UniqueName: "kubernetes.io/secret/4feb13a5-c1de-48bc-afb3-6595ee58ce54-ingress-nginx-admission-token-txxxd") on node "ingress-addon-legacy-393627" DevicePath ""
	Dec 07 20:19:20 ingress-addon-legacy-393627 kubelet[1444]: W1207 20:19:20.710701    1444 pod_container_deletor.go:77] Container "71f1802816a9c00d5e4b796b6d67e50c3159fa6980f0738ec5f589a3e6bbbdec" not found in pod's containers
	Dec 07 20:19:28 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:19:28.930026    1444 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 07 20:19:29 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:19:29.086891    1444 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-l5p42" (UniqueName: "kubernetes.io/secret/72ec9701-9186-41f5-b879-9f7d8debcc67-minikube-ingress-dns-token-l5p42") pod "kube-ingress-dns-minikube" (UID: "72ec9701-9186-41f5-b879-9f7d8debcc67")
	Dec 07 20:19:46 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:19:46.019474    1444 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 07 20:19:46 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:19:46.143521    1444 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-89j89" (UniqueName: "kubernetes.io/secret/e8b54a24-0cbf-4d6f-83d8-7c9169c1361c-default-token-89j89") pod "nginx" (UID: "e8b54a24-0cbf-4d6f-83d8-7c9169c1361c")
	Dec 07 20:22:10 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:10.451940    1444 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 07 20:22:10 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:10.522944    1444 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-89j89" (UniqueName: "kubernetes.io/secret/bebcca27-270b-4dce-9115-05d15b8c1b2c-default-token-89j89") pod "hello-world-app-5f5d8b66bb-p2vqx" (UID: "bebcca27-270b-4dce-9115-05d15b8c1b2c")
	Dec 07 20:22:11 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:11.936100    1444 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e153921a8bf328305c3e04fa0c201f9b5f6e49e8f74b166f3efcbc9d92d2088b
	Dec 07 20:22:11 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:11.970623    1444 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e153921a8bf328305c3e04fa0c201f9b5f6e49e8f74b166f3efcbc9d92d2088b
	Dec 07 20:22:11 ingress-addon-legacy-393627 kubelet[1444]: E1207 20:22:11.972228    1444 remote_runtime.go:295] ContainerStatus "e153921a8bf328305c3e04fa0c201f9b5f6e49e8f74b166f3efcbc9d92d2088b" from runtime service failed: rpc error: code = NotFound desc = could not find container "e153921a8bf328305c3e04fa0c201f9b5f6e49e8f74b166f3efcbc9d92d2088b": container with ID starting with e153921a8bf328305c3e04fa0c201f9b5f6e49e8f74b166f3efcbc9d92d2088b not found: ID does not exist
	Dec 07 20:22:12 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:12.030622    1444 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-l5p42" (UniqueName: "kubernetes.io/secret/72ec9701-9186-41f5-b879-9f7d8debcc67-minikube-ingress-dns-token-l5p42") pod "72ec9701-9186-41f5-b879-9f7d8debcc67" (UID: "72ec9701-9186-41f5-b879-9f7d8debcc67")
	Dec 07 20:22:12 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:12.043821    1444 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ec9701-9186-41f5-b879-9f7d8debcc67-minikube-ingress-dns-token-l5p42" (OuterVolumeSpecName: "minikube-ingress-dns-token-l5p42") pod "72ec9701-9186-41f5-b879-9f7d8debcc67" (UID: "72ec9701-9186-41f5-b879-9f7d8debcc67"). InnerVolumeSpecName "minikube-ingress-dns-token-l5p42". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:22:12 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:12.130972    1444 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-l5p42" (UniqueName: "kubernetes.io/secret/72ec9701-9186-41f5-b879-9f7d8debcc67-minikube-ingress-dns-token-l5p42") on node "ingress-addon-legacy-393627" DevicePath ""
	Dec 07 20:22:16 ingress-addon-legacy-393627 kubelet[1444]: E1207 20:22:16.350109    1444 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-wbcrx.179ea647c37dd8cd", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-wbcrx", UID:"bf66d889-e5e7-4ea0-85e2-010059ecd24c", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-393627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc154a83e149528cd, ext:241517653507, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc154a83e149528cd, ext:241517653507, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-wbcrx.179ea647c37dd8cd" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 07 20:22:16 ingress-addon-legacy-393627 kubelet[1444]: E1207 20:22:16.374165    1444 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-wbcrx.179ea647c37dd8cd", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-wbcrx", UID:"bf66d889-e5e7-4ea0-85e2-010059ecd24c", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-393627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc154a83e149528cd, ext:241517653507, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc154a83e15e119af, ext:241539407590, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-wbcrx.179ea647c37dd8cd" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 07 20:22:19 ingress-addon-legacy-393627 kubelet[1444]: W1207 20:22:19.011626    1444 pod_container_deletor.go:77] Container "2ece5d11ae06cb679073b9e7b7bfa0d3b0be769339cba61e9c3359c66ef722cf" not found in pod's containers
	Dec 07 20:22:20 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:20.559569    1444 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-k5vfx" (UniqueName: "kubernetes.io/secret/bf66d889-e5e7-4ea0-85e2-010059ecd24c-ingress-nginx-token-k5vfx") pod "bf66d889-e5e7-4ea0-85e2-010059ecd24c" (UID: "bf66d889-e5e7-4ea0-85e2-010059ecd24c")
	Dec 07 20:22:20 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:20.559627    1444 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bf66d889-e5e7-4ea0-85e2-010059ecd24c-webhook-cert") pod "bf66d889-e5e7-4ea0-85e2-010059ecd24c" (UID: "bf66d889-e5e7-4ea0-85e2-010059ecd24c")
	Dec 07 20:22:20 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:20.564817    1444 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf66d889-e5e7-4ea0-85e2-010059ecd24c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "bf66d889-e5e7-4ea0-85e2-010059ecd24c" (UID: "bf66d889-e5e7-4ea0-85e2-010059ecd24c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:22:20 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:20.564857    1444 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf66d889-e5e7-4ea0-85e2-010059ecd24c-ingress-nginx-token-k5vfx" (OuterVolumeSpecName: "ingress-nginx-token-k5vfx") pod "bf66d889-e5e7-4ea0-85e2-010059ecd24c" (UID: "bf66d889-e5e7-4ea0-85e2-010059ecd24c"). InnerVolumeSpecName "ingress-nginx-token-k5vfx". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:22:20 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:20.659943    1444 reconciler.go:319] Volume detached for volume "ingress-nginx-token-k5vfx" (UniqueName: "kubernetes.io/secret/bf66d889-e5e7-4ea0-85e2-010059ecd24c-ingress-nginx-token-k5vfx") on node "ingress-addon-legacy-393627" DevicePath ""
	Dec 07 20:22:20 ingress-addon-legacy-393627 kubelet[1444]: I1207 20:22:20.659978    1444 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bf66d889-e5e7-4ea0-85e2-010059ecd24c-webhook-cert") on node "ingress-addon-legacy-393627" DevicePath ""
	Dec 07 20:22:21 ingress-addon-legacy-393627 kubelet[1444]: W1207 20:22:21.420076    1444 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/bf66d889-e5e7-4ea0-85e2-010059ecd24c/volumes" does not exist
	
	* 
	* ==> storage-provisioner [5f18a1d5c78b1575e9d62580f1a8b950d3668a2a15e1b6c365e4edea1e28e405] <==
	* I1207 20:18:32.185847       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 20:18:32.198122       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 20:18:32.200101       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 20:18:32.209151       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 20:18:32.212401       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-393627_bed8b397-28ba-4868-a250-b7627deb4ba6!
	I1207 20:18:32.211035       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"558d34f5-3d96-401e-8288-5d14de54b727", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-393627_bed8b397-28ba-4868-a250-b7627deb4ba6 became leader
	I1207 20:18:32.313587       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-393627_bed8b397-28ba-4868-a250-b7627deb4ba6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-393627 -n ingress-addon-legacy-393627
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-393627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (175.96s)
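Note: the post-mortem above shows the ingress-nginx admission jobs completing and the controller pod being created, and later the namespace being terminated during cleanup, but it does not by itself show what the ingress path looked like at failure time. A typical manual follow-up when this test fails, assuming the ingress-addon-legacy-393627 cluster from this run is still reachable and the addon has not yet been fully removed (these kubectl invocations are illustrative and not part of the test harness):

	kubectl --context ingress-addon-legacy-393627 -n ingress-nginx get pods -o wide
	kubectl --context ingress-addon-legacy-393627 get ingress,svc,endpoints -A
	kubectl --context ingress-addon-legacy-393627 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50

These are standard kubectl commands; the context and deployment names are taken from the log output above.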

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-jbm9q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-jbm9q -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-jbm9q -- sh -c "ping -c 1 192.168.39.1": exit status 1 (196.849111ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-jbm9q): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-vllfc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-vllfc -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-vllfc -- sh -c "ping -c 1 192.168.39.1": exit status 1 (233.806326ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-vllfc): exit status 1
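Both pods fail the same way: BusyBox's ping reports "permission denied (are you root?)", which is a local socket-permission error rather than a network failure. Opening a raw ICMP socket requires CAP_NET_RAW (or root), and the unprivileged ICMP-echo fallback is only available to groups listed in the kernel's net.ipv4.ping_group_range. A minimal manual check along those lines, assuming the multinode-660958 profile is still running (these commands are illustrative and not part of the test harness):

	# Does the node allow unprivileged ICMP echo sockets? The kernel default "1 0" means no group may use them.
	out/minikube-linux-amd64 -p multinode-660958 ssh "sysctl net.ipv4.ping_group_range"
	# Which capabilities does the busybox container actually hold? CapEff must include CAP_NET_RAW for raw-socket ping.
	out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-jbm9q -- sh -c "grep Cap /proc/1/status"

If the sysctl is at its default and CAP_NET_RAW is absent from the container's effective capability set, the permission error above is the expected outcome for a non-root pod.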
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-660958 -n multinode-660958
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-660958 logs -n 25: (1.601036744s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-109389 ssh -- ls                    | mount-start-2-109389 | jenkins | v1.32.0 | 07 Dec 23 20:26 UTC | 07 Dec 23 20:26 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-109389 ssh --                       | mount-start-2-109389 | jenkins | v1.32.0 | 07 Dec 23 20:26 UTC | 07 Dec 23 20:26 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-109389                           | mount-start-2-109389 | jenkins | v1.32.0 | 07 Dec 23 20:26 UTC | 07 Dec 23 20:26 UTC |
	| start   | -p mount-start-2-109389                           | mount-start-2-109389 | jenkins | v1.32.0 | 07 Dec 23 20:26 UTC | 07 Dec 23 20:26 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-109389 | jenkins | v1.32.0 | 07 Dec 23 20:26 UTC |                     |
	|         | --profile mount-start-2-109389                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-109389 ssh -- ls                    | mount-start-2-109389 | jenkins | v1.32.0 | 07 Dec 23 20:26 UTC | 07 Dec 23 20:26 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-109389 ssh --                       | mount-start-2-109389 | jenkins | v1.32.0 | 07 Dec 23 20:26 UTC | 07 Dec 23 20:26 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-109389                           | mount-start-2-109389 | jenkins | v1.32.0 | 07 Dec 23 20:26 UTC | 07 Dec 23 20:26 UTC |
	| delete  | -p mount-start-1-092336                           | mount-start-1-092336 | jenkins | v1.32.0 | 07 Dec 23 20:26 UTC | 07 Dec 23 20:26 UTC |
	| start   | -p multinode-660958                               | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:26 UTC | 07 Dec 23 20:28 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- apply -f                   | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- rollout                    | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- get pods -o                | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- get pods -o                | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- exec                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | busybox-5bc68d56bd-jbm9q --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- exec                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | busybox-5bc68d56bd-vllfc --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- exec                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | busybox-5bc68d56bd-jbm9q --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- exec                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | busybox-5bc68d56bd-vllfc --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- exec                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | busybox-5bc68d56bd-jbm9q -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- exec                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | busybox-5bc68d56bd-vllfc -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- get pods -o                | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- exec                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | busybox-5bc68d56bd-jbm9q                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- exec                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC |                     |
	|         | busybox-5bc68d56bd-jbm9q -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- exec                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC | 07 Dec 23 20:28 UTC |
	|         | busybox-5bc68d56bd-vllfc                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-660958 -- exec                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:28 UTC |                     |
	|         | busybox-5bc68d56bd-vllfc -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:26:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:26:49.121629   30218 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:26:49.121757   30218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:26:49.121766   30218 out.go:309] Setting ErrFile to fd 2...
	I1207 20:26:49.121770   30218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:26:49.121945   30218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 20:26:49.122488   30218 out.go:303] Setting JSON to false
	I1207 20:26:49.123365   30218 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4155,"bootTime":1701976654,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 20:26:49.123428   30218 start.go:138] virtualization: kvm guest
	I1207 20:26:49.125718   30218 out.go:177] * [multinode-660958] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 20:26:49.127200   30218 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:26:49.127212   30218 notify.go:220] Checking for updates...
	I1207 20:26:49.128617   30218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:26:49.129988   30218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:26:49.131470   30218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:26:49.132924   30218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 20:26:49.134284   30218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:26:49.135759   30218 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:26:49.169389   30218 out.go:177] * Using the kvm2 driver based on user configuration
	I1207 20:26:49.170801   30218 start.go:298] selected driver: kvm2
	I1207 20:26:49.170817   30218 start.go:902] validating driver "kvm2" against <nil>
	I1207 20:26:49.170827   30218 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:26:49.171513   30218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:26:49.171594   30218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 20:26:49.185421   30218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 20:26:49.185465   30218 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 20:26:49.185697   30218 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 20:26:49.185764   30218 cni.go:84] Creating CNI manager for ""
	I1207 20:26:49.185780   30218 cni.go:136] 0 nodes found, recommending kindnet
	I1207 20:26:49.185788   30218 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 20:26:49.185796   30218 start_flags.go:323] config:
	{Name:multinode-660958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:26:49.185976   30218 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:26:49.187926   30218 out.go:177] * Starting control plane node multinode-660958 in cluster multinode-660958
	I1207 20:26:49.189333   30218 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:26:49.189387   30218 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 20:26:49.189401   30218 cache.go:56] Caching tarball of preloaded images
	I1207 20:26:49.189481   30218 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 20:26:49.189496   30218 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 20:26:49.189882   30218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
	I1207 20:26:49.189907   30218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json: {Name:mk076e03be1b2c1176b8eacc212fac17a89fcd3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:26:49.190085   30218 start.go:365] acquiring machines lock for multinode-660958: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 20:26:49.190135   30218 start.go:369] acquired machines lock for "multinode-660958" in 30.591µs
	I1207 20:26:49.190174   30218 start.go:93] Provisioning new machine with config: &{Name:multinode-660958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 20:26:49.190248   30218 start.go:125] createHost starting for "" (driver="kvm2")
	I1207 20:26:49.192065   30218 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 20:26:49.192197   30218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:26:49.192241   30218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:26:49.205742   30218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I1207 20:26:49.206145   30218 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:26:49.206635   30218 main.go:141] libmachine: Using API Version  1
	I1207 20:26:49.206658   30218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:26:49.207024   30218 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:26:49.207201   30218 main.go:141] libmachine: (multinode-660958) Calling .GetMachineName
	I1207 20:26:49.207341   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:26:49.207510   30218 start.go:159] libmachine.API.Create for "multinode-660958" (driver="kvm2")
	I1207 20:26:49.207558   30218 client.go:168] LocalClient.Create starting
	I1207 20:26:49.207599   30218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 20:26:49.207638   30218 main.go:141] libmachine: Decoding PEM data...
	I1207 20:26:49.207654   30218 main.go:141] libmachine: Parsing certificate...
	I1207 20:26:49.207701   30218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 20:26:49.207719   30218 main.go:141] libmachine: Decoding PEM data...
	I1207 20:26:49.207733   30218 main.go:141] libmachine: Parsing certificate...
	I1207 20:26:49.207748   30218 main.go:141] libmachine: Running pre-create checks...
	I1207 20:26:49.207763   30218 main.go:141] libmachine: (multinode-660958) Calling .PreCreateCheck
	I1207 20:26:49.208106   30218 main.go:141] libmachine: (multinode-660958) Calling .GetConfigRaw
	I1207 20:26:49.208477   30218 main.go:141] libmachine: Creating machine...
	I1207 20:26:49.208491   30218 main.go:141] libmachine: (multinode-660958) Calling .Create
	I1207 20:26:49.208636   30218 main.go:141] libmachine: (multinode-660958) Creating KVM machine...
	I1207 20:26:49.209830   30218 main.go:141] libmachine: (multinode-660958) DBG | found existing default KVM network
	I1207 20:26:49.210800   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:49.210325   30241 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147900}
	I1207 20:26:49.216675   30218 main.go:141] libmachine: (multinode-660958) DBG | trying to create private KVM network mk-multinode-660958 192.168.39.0/24...
	I1207 20:26:49.291538   30218 main.go:141] libmachine: (multinode-660958) DBG | private KVM network mk-multinode-660958 192.168.39.0/24 created
	I1207 20:26:49.291571   30218 main.go:141] libmachine: (multinode-660958) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958 ...
	I1207 20:26:49.291592   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:49.291513   30241 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:26:49.291611   30218 main.go:141] libmachine: (multinode-660958) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 20:26:49.291654   30218 main.go:141] libmachine: (multinode-660958) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 20:26:49.493666   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:49.493524   30241 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa...
	I1207 20:26:49.685010   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:49.684875   30241 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/multinode-660958.rawdisk...
	I1207 20:26:49.685043   30218 main.go:141] libmachine: (multinode-660958) DBG | Writing magic tar header
	I1207 20:26:49.685066   30218 main.go:141] libmachine: (multinode-660958) DBG | Writing SSH key tar header
	I1207 20:26:49.685079   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:49.684993   30241 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958 ...
	I1207 20:26:49.685105   30218 main.go:141] libmachine: (multinode-660958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958
	I1207 20:26:49.685176   30218 main.go:141] libmachine: (multinode-660958) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958 (perms=drwx------)
	I1207 20:26:49.685202   30218 main.go:141] libmachine: (multinode-660958) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 20:26:49.685214   30218 main.go:141] libmachine: (multinode-660958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 20:26:49.685232   30218 main.go:141] libmachine: (multinode-660958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:26:49.685246   30218 main.go:141] libmachine: (multinode-660958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 20:26:49.685260   30218 main.go:141] libmachine: (multinode-660958) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 20:26:49.685277   30218 main.go:141] libmachine: (multinode-660958) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 20:26:49.685287   30218 main.go:141] libmachine: (multinode-660958) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 20:26:49.685312   30218 main.go:141] libmachine: (multinode-660958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 20:26:49.685331   30218 main.go:141] libmachine: (multinode-660958) DBG | Checking permissions on dir: /home/jenkins
	I1207 20:26:49.685341   30218 main.go:141] libmachine: (multinode-660958) DBG | Checking permissions on dir: /home
	I1207 20:26:49.685355   30218 main.go:141] libmachine: (multinode-660958) DBG | Skipping /home - not owner
	I1207 20:26:49.685406   30218 main.go:141] libmachine: (multinode-660958) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 20:26:49.685441   30218 main.go:141] libmachine: (multinode-660958) Creating domain...
	I1207 20:26:49.686329   30218 main.go:141] libmachine: (multinode-660958) define libvirt domain using xml: 
	I1207 20:26:49.686341   30218 main.go:141] libmachine: (multinode-660958) <domain type='kvm'>
	I1207 20:26:49.686348   30218 main.go:141] libmachine: (multinode-660958)   <name>multinode-660958</name>
	I1207 20:26:49.686356   30218 main.go:141] libmachine: (multinode-660958)   <memory unit='MiB'>2200</memory>
	I1207 20:26:49.686423   30218 main.go:141] libmachine: (multinode-660958)   <vcpu>2</vcpu>
	I1207 20:26:49.686447   30218 main.go:141] libmachine: (multinode-660958)   <features>
	I1207 20:26:49.686457   30218 main.go:141] libmachine: (multinode-660958)     <acpi/>
	I1207 20:26:49.686466   30218 main.go:141] libmachine: (multinode-660958)     <apic/>
	I1207 20:26:49.686491   30218 main.go:141] libmachine: (multinode-660958)     <pae/>
	I1207 20:26:49.686513   30218 main.go:141] libmachine: (multinode-660958)     
	I1207 20:26:49.686530   30218 main.go:141] libmachine: (multinode-660958)   </features>
	I1207 20:26:49.686542   30218 main.go:141] libmachine: (multinode-660958)   <cpu mode='host-passthrough'>
	I1207 20:26:49.686554   30218 main.go:141] libmachine: (multinode-660958)   
	I1207 20:26:49.686566   30218 main.go:141] libmachine: (multinode-660958)   </cpu>
	I1207 20:26:49.686602   30218 main.go:141] libmachine: (multinode-660958)   <os>
	I1207 20:26:49.686626   30218 main.go:141] libmachine: (multinode-660958)     <type>hvm</type>
	I1207 20:26:49.686638   30218 main.go:141] libmachine: (multinode-660958)     <boot dev='cdrom'/>
	I1207 20:26:49.686651   30218 main.go:141] libmachine: (multinode-660958)     <boot dev='hd'/>
	I1207 20:26:49.686664   30218 main.go:141] libmachine: (multinode-660958)     <bootmenu enable='no'/>
	I1207 20:26:49.686674   30218 main.go:141] libmachine: (multinode-660958)   </os>
	I1207 20:26:49.686688   30218 main.go:141] libmachine: (multinode-660958)   <devices>
	I1207 20:26:49.686713   30218 main.go:141] libmachine: (multinode-660958)     <disk type='file' device='cdrom'>
	I1207 20:26:49.686733   30218 main.go:141] libmachine: (multinode-660958)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/boot2docker.iso'/>
	I1207 20:26:49.686749   30218 main.go:141] libmachine: (multinode-660958)       <target dev='hdc' bus='scsi'/>
	I1207 20:26:49.686763   30218 main.go:141] libmachine: (multinode-660958)       <readonly/>
	I1207 20:26:49.686779   30218 main.go:141] libmachine: (multinode-660958)     </disk>
	I1207 20:26:49.686795   30218 main.go:141] libmachine: (multinode-660958)     <disk type='file' device='disk'>
	I1207 20:26:49.686811   30218 main.go:141] libmachine: (multinode-660958)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 20:26:49.686831   30218 main.go:141] libmachine: (multinode-660958)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/multinode-660958.rawdisk'/>
	I1207 20:26:49.686844   30218 main.go:141] libmachine: (multinode-660958)       <target dev='hda' bus='virtio'/>
	I1207 20:26:49.686856   30218 main.go:141] libmachine: (multinode-660958)     </disk>
	I1207 20:26:49.686871   30218 main.go:141] libmachine: (multinode-660958)     <interface type='network'>
	I1207 20:26:49.686890   30218 main.go:141] libmachine: (multinode-660958)       <source network='mk-multinode-660958'/>
	I1207 20:26:49.686904   30218 main.go:141] libmachine: (multinode-660958)       <model type='virtio'/>
	I1207 20:26:49.686917   30218 main.go:141] libmachine: (multinode-660958)     </interface>
	I1207 20:26:49.686931   30218 main.go:141] libmachine: (multinode-660958)     <interface type='network'>
	I1207 20:26:49.686943   30218 main.go:141] libmachine: (multinode-660958)       <source network='default'/>
	I1207 20:26:49.686957   30218 main.go:141] libmachine: (multinode-660958)       <model type='virtio'/>
	I1207 20:26:49.686969   30218 main.go:141] libmachine: (multinode-660958)     </interface>
	I1207 20:26:49.686986   30218 main.go:141] libmachine: (multinode-660958)     <serial type='pty'>
	I1207 20:26:49.687001   30218 main.go:141] libmachine: (multinode-660958)       <target port='0'/>
	I1207 20:26:49.687013   30218 main.go:141] libmachine: (multinode-660958)     </serial>
	I1207 20:26:49.687026   30218 main.go:141] libmachine: (multinode-660958)     <console type='pty'>
	I1207 20:26:49.687041   30218 main.go:141] libmachine: (multinode-660958)       <target type='serial' port='0'/>
	I1207 20:26:49.687053   30218 main.go:141] libmachine: (multinode-660958)     </console>
	I1207 20:26:49.687066   30218 main.go:141] libmachine: (multinode-660958)     <rng model='virtio'>
	I1207 20:26:49.687082   30218 main.go:141] libmachine: (multinode-660958)       <backend model='random'>/dev/random</backend>
	I1207 20:26:49.687098   30218 main.go:141] libmachine: (multinode-660958)     </rng>
	I1207 20:26:49.687110   30218 main.go:141] libmachine: (multinode-660958)     
	I1207 20:26:49.687128   30218 main.go:141] libmachine: (multinode-660958)     
	I1207 20:26:49.687140   30218 main.go:141] libmachine: (multinode-660958)   </devices>
	I1207 20:26:49.687160   30218 main.go:141] libmachine: (multinode-660958) </domain>
	I1207 20:26:49.687180   30218 main.go:141] libmachine: (multinode-660958) 
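The XML assembled line by line above is handed to libvirt to define the persistent domain, and the domain is then started (the "Creating domain..." line). A sketch of that hand-off, assuming the libvirt.org/go/libvirt bindings and a qemu:///system connection URI (the kvm2 driver's actual plumbing goes through libmachine and may differ):

    package sketch

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart registers the domain XML built above with libvirtd and
    // then boots the resulting domain.
    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        // "define libvirt domain using xml:" corresponds to defining the domain...
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()

        // ...and "Creating domain..." to starting it.
        return dom.Create()
    }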
	I1207 20:26:49.690971   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:b7:77:fe in network default
	I1207 20:26:49.691487   30218 main.go:141] libmachine: (multinode-660958) Ensuring networks are active...
	I1207 20:26:49.691502   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:49.692072   30218 main.go:141] libmachine: (multinode-660958) Ensuring network default is active
	I1207 20:26:49.692374   30218 main.go:141] libmachine: (multinode-660958) Ensuring network mk-multinode-660958 is active
	I1207 20:26:49.692913   30218 main.go:141] libmachine: (multinode-660958) Getting domain xml...
	I1207 20:26:49.693670   30218 main.go:141] libmachine: (multinode-660958) Creating domain...
	I1207 20:26:50.898839   30218 main.go:141] libmachine: (multinode-660958) Waiting to get IP...
	I1207 20:26:50.899608   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:50.899987   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:26:50.900007   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:50.899961   30241 retry.go:31] will retry after 310.597691ms: waiting for machine to come up
	I1207 20:26:51.212793   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:51.213214   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:26:51.213243   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:51.213167   30241 retry.go:31] will retry after 379.246449ms: waiting for machine to come up
	I1207 20:26:51.593692   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:51.594137   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:26:51.594163   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:51.594105   30241 retry.go:31] will retry after 474.154116ms: waiting for machine to come up
	I1207 20:26:52.069791   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:52.070228   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:26:52.070249   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:52.070177   30241 retry.go:31] will retry after 565.921558ms: waiting for machine to come up
	I1207 20:26:52.637908   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:52.638336   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:26:52.638356   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:52.638308   30241 retry.go:31] will retry after 729.44272ms: waiting for machine to come up
	I1207 20:26:53.369174   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:53.369542   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:26:53.369571   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:53.369486   30241 retry.go:31] will retry after 735.775667ms: waiting for machine to come up
	I1207 20:26:54.106488   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:54.106984   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:26:54.107013   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:54.106954   30241 retry.go:31] will retry after 1.088075759s: waiting for machine to come up
	I1207 20:26:55.196833   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:55.197272   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:26:55.197303   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:55.197224   30241 retry.go:31] will retry after 1.441589411s: waiting for machine to come up
	I1207 20:26:56.640849   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:56.641258   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:26:56.641289   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:56.641202   30241 retry.go:31] will retry after 1.726272812s: waiting for machine to come up
	I1207 20:26:58.369888   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:26:58.370223   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:26:58.370253   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:26:58.370166   30241 retry.go:31] will retry after 2.050813418s: waiting for machine to come up
	I1207 20:27:00.423085   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:00.423520   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:27:00.423572   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:27:00.423489   30241 retry.go:31] will retry after 1.969427553s: waiting for machine to come up
	I1207 20:27:02.395143   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:02.395604   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:27:02.395638   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:27:02.395546   30241 retry.go:31] will retry after 2.786938563s: waiting for machine to come up
	I1207 20:27:05.184536   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:05.184976   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:27:05.185003   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:27:05.184941   30241 retry.go:31] will retry after 3.888630008s: waiting for machine to come up
	I1207 20:27:09.078081   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:09.078551   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:27:09.078578   30218 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:27:09.078518   30241 retry.go:31] will retry after 5.322225561s: waiting for machine to come up
	I1207 20:27:14.401773   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.402194   30218 main.go:141] libmachine: (multinode-660958) Found IP for machine: 192.168.39.19
	I1207 20:27:14.402222   30218 main.go:141] libmachine: (multinode-660958) Reserving static IP address...
	I1207 20:27:14.402241   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has current primary IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.402566   30218 main.go:141] libmachine: (multinode-660958) DBG | unable to find host DHCP lease matching {name: "multinode-660958", mac: "52:54:00:f5:93:7e", ip: "192.168.39.19"} in network mk-multinode-660958
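The repeated "will retry after ...: waiting for machine to come up" lines above are a polling loop that re-checks the DHCP leases with a growing, jittered pause between attempts (roughly 300ms at first, several seconds by the end). This is only a sketch of that pattern under names I chose, not the retry.go implementation:

    package sketch

    import (
        "errors"
        "log"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it reports an address or the timeout
    // expires, lengthening the pause between attempts as in the log above.
    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            // Add jitter so concurrent machines do not poll in lockstep.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            log.Printf("will retry after %v: waiting for machine to come up", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", errors.New("timed out waiting for machine IP")
    }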
	I1207 20:27:14.471694   30218 main.go:141] libmachine: (multinode-660958) DBG | Getting to WaitForSSH function...
	I1207 20:27:14.471726   30218 main.go:141] libmachine: (multinode-660958) Reserved static IP address: 192.168.39.19
	I1207 20:27:14.471740   30218 main.go:141] libmachine: (multinode-660958) Waiting for SSH to be available...
	I1207 20:27:14.474398   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.474812   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:14.474843   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.475000   30218 main.go:141] libmachine: (multinode-660958) DBG | Using SSH client type: external
	I1207 20:27:14.475025   30218 main.go:141] libmachine: (multinode-660958) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa (-rw-------)
	I1207 20:27:14.475061   30218 main.go:141] libmachine: (multinode-660958) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 20:27:14.475076   30218 main.go:141] libmachine: (multinode-660958) DBG | About to run SSH command:
	I1207 20:27:14.475089   30218 main.go:141] libmachine: (multinode-660958) DBG | exit 0
	I1207 20:27:14.561996   30218 main.go:141] libmachine: (multinode-660958) DBG | SSH cmd err, output: <nil>: 
	I1207 20:27:14.562233   30218 main.go:141] libmachine: (multinode-660958) KVM machine creation complete!
	I1207 20:27:14.562540   30218 main.go:141] libmachine: (multinode-660958) Calling .GetConfigRaw
	I1207 20:27:14.563094   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:27:14.563276   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:27:14.563407   30218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1207 20:27:14.563419   30218 main.go:141] libmachine: (multinode-660958) Calling .GetState
	I1207 20:27:14.564571   30218 main.go:141] libmachine: Detecting operating system of created instance...
	I1207 20:27:14.564583   30218 main.go:141] libmachine: Waiting for SSH to be available...
	I1207 20:27:14.564590   30218 main.go:141] libmachine: Getting to WaitForSSH function...
	I1207 20:27:14.564596   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:14.566520   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.566836   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:14.566873   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.566953   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:14.567109   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:14.567221   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:14.567327   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:14.567453   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:27:14.567864   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:27:14.567878   30218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1207 20:27:14.677324   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:27:14.677350   30218 main.go:141] libmachine: Detecting the provisioner...
	I1207 20:27:14.677363   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:14.679995   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.680334   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:14.680360   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.680504   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:14.680704   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:14.680871   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:14.681037   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:14.681221   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:27:14.681526   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:27:14.681537   30218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1207 20:27:14.790628   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2b7375-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1207 20:27:14.790694   30218 main.go:141] libmachine: found compatible host: buildroot
	I1207 20:27:14.790713   30218 main.go:141] libmachine: Provisioning with buildroot...
	I1207 20:27:14.790727   30218 main.go:141] libmachine: (multinode-660958) Calling .GetMachineName
	I1207 20:27:14.790985   30218 buildroot.go:166] provisioning hostname "multinode-660958"
	I1207 20:27:14.791014   30218 main.go:141] libmachine: (multinode-660958) Calling .GetMachineName
	I1207 20:27:14.791170   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:14.793367   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.793660   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:14.793695   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.793826   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:14.794007   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:14.794162   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:14.794278   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:14.794409   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:27:14.794716   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:27:14.794731   30218 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-660958 && echo "multinode-660958" | sudo tee /etc/hostname
	I1207 20:27:14.916689   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-660958
	
	I1207 20:27:14.916722   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:14.919136   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.919508   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:14.919539   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:14.919663   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:14.919824   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:14.919960   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:14.920076   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:14.920253   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:27:14.920567   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:27:14.920591   30218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-660958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-660958/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-660958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:27:15.037448   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:27:15.037491   30218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 20:27:15.037513   30218 buildroot.go:174] setting up certificates
	I1207 20:27:15.037526   30218 provision.go:83] configureAuth start
	I1207 20:27:15.037544   30218 main.go:141] libmachine: (multinode-660958) Calling .GetMachineName
	I1207 20:27:15.037808   30218 main.go:141] libmachine: (multinode-660958) Calling .GetIP
	I1207 20:27:15.039995   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.040358   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.040387   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.040532   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:15.042482   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.042776   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.042805   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.042903   30218 provision.go:138] copyHostCerts
	I1207 20:27:15.042933   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:27:15.042971   30218 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 20:27:15.042983   30218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:27:15.043065   30218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 20:27:15.043179   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:27:15.043207   30218 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 20:27:15.043214   30218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:27:15.043257   30218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 20:27:15.043317   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:27:15.043340   30218 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 20:27:15.043348   30218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:27:15.043380   30218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 20:27:15.043481   30218 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.multinode-660958 san=[192.168.39.19 192.168.39.19 localhost 127.0.0.1 minikube multinode-660958]
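The server certificate generated here is signed by the minikube CA and carries the SANs listed in the log (the node IP, localhost/127.0.0.1, and the machine names). A compact sketch of issuing such a certificate with the standard library, with the key size, validity, and function name being my assumptions rather than minikube's provisioner code:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a CA-signed server certificate whose SANs match
    // the san=[...] list logged above.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-660958"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-660958"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.19"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }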
	I1207 20:27:15.127262   30218 provision.go:172] copyRemoteCerts
	I1207 20:27:15.127345   30218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:27:15.127375   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:15.129961   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.130234   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.130265   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.130489   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:15.130675   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:15.130837   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:15.130957   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:27:15.217210   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 20:27:15.217272   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 20:27:15.242368   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 20:27:15.242427   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1207 20:27:15.266266   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 20:27:15.266324   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 20:27:15.291363   30218 provision.go:86] duration metric: configureAuth took 253.822294ms
	I1207 20:27:15.291391   30218 buildroot.go:189] setting minikube options for container-runtime
	I1207 20:27:15.291612   30218 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:27:15.291703   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:15.294153   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.294455   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.294479   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.294761   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:15.294999   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:15.295174   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:15.295303   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:15.295499   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:27:15.295832   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:27:15.295854   30218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 20:27:15.622869   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
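The `%!s(MISSING)` and `%!p(MISSING)` tokens in these SSH commands (and later in `date +%!s(MISSING).%!N(MISSING)` and `stat -c "%!s(MISSING) %!y(MISSING)"`) are Go's fmt error markers, not part of what ran on the guest: the command string still contains literal %s/%p/%N shell verbs when it is passed back through a Printf-style logger with no matching arguments. A two-line illustration of the mechanism (my own example, not minikube code):

    package main

    import "fmt"

    func main() {
        // The template keeps its literal %s, and with no matching argument
        // fmt flags the verb as missing when it prints the string:
        fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s ...\n")
        // Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) ...
    }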
	I1207 20:27:15.622889   30218 main.go:141] libmachine: Checking connection to Docker...
	I1207 20:27:15.622898   30218 main.go:141] libmachine: (multinode-660958) Calling .GetURL
	I1207 20:27:15.624386   30218 main.go:141] libmachine: (multinode-660958) DBG | Using libvirt version 6000000
	I1207 20:27:15.626749   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.627030   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.627061   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.627255   30218 main.go:141] libmachine: Docker is up and running!
	I1207 20:27:15.627270   30218 main.go:141] libmachine: Reticulating splines...
	I1207 20:27:15.627275   30218 client.go:171] LocalClient.Create took 26.419707989s
	I1207 20:27:15.627296   30218 start.go:167] duration metric: libmachine.API.Create for "multinode-660958" took 26.419786289s
	I1207 20:27:15.627302   30218 start.go:300] post-start starting for "multinode-660958" (driver="kvm2")
	I1207 20:27:15.627313   30218 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:27:15.627328   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:27:15.627585   30218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:27:15.627615   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:15.629607   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.629910   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.629947   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.630175   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:15.630328   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:15.630516   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:15.630689   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:27:15.716540   30218 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:27:15.720558   30218 command_runner.go:130] > NAME=Buildroot
	I1207 20:27:15.720578   30218 command_runner.go:130] > VERSION=2021.02.12-1-ge2b7375-dirty
	I1207 20:27:15.720584   30218 command_runner.go:130] > ID=buildroot
	I1207 20:27:15.720590   30218 command_runner.go:130] > VERSION_ID=2021.02.12
	I1207 20:27:15.720595   30218 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1207 20:27:15.720632   30218 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 20:27:15.720642   30218 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 20:27:15.720706   30218 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 20:27:15.720782   30218 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 20:27:15.720793   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /etc/ssl/certs/168402.pem
	I1207 20:27:15.720905   30218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 20:27:15.729878   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:27:15.752245   30218 start.go:303] post-start completed in 124.929056ms
	I1207 20:27:15.752303   30218 main.go:141] libmachine: (multinode-660958) Calling .GetConfigRaw
	I1207 20:27:15.752859   30218 main.go:141] libmachine: (multinode-660958) Calling .GetIP
	I1207 20:27:15.755666   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.756025   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.756054   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.756359   30218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
	I1207 20:27:15.756529   30218 start.go:128] duration metric: createHost completed in 26.566268651s
	I1207 20:27:15.756550   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:15.758703   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.759127   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.759161   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.759303   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:15.759490   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:15.759653   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:15.759810   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:15.759974   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:27:15.760276   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:27:15.760289   30218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 20:27:15.870689   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701980835.843344658
	
	I1207 20:27:15.870712   30218 fix.go:206] guest clock: 1701980835.843344658
	I1207 20:27:15.870721   30218 fix.go:219] Guest: 2023-12-07 20:27:15.843344658 +0000 UTC Remote: 2023-12-07 20:27:15.75654006 +0000 UTC m=+26.682323143 (delta=86.804598ms)
	I1207 20:27:15.870769   30218 fix.go:190] guest clock delta is within tolerance: 86.804598ms
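fix.go reads the guest clock over SSH (the `date +%s.%N` command above, logged with its verbs unfilled) and compares it against the host's time, only resyncing the guest when the delta exceeds a tolerance. A sketch of that comparison; the tolerance value is whatever the caller passes in, as I do not know minikube's constant:

    package sketch

    import "time"

    // clockWithinTolerance reports whether the guest/host clock delta is
    // small enough to leave the guest clock alone, as in the
    // "guest clock delta is within tolerance" line above.
    func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

With the values logged here (delta of roughly 86.8ms), any tolerance of a second or more would leave the guest clock untouched.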
	I1207 20:27:15.870775   30218 start.go:83] releasing machines lock for "multinode-660958", held for 26.680626834s
	I1207 20:27:15.870793   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:27:15.871045   30218 main.go:141] libmachine: (multinode-660958) Calling .GetIP
	I1207 20:27:15.873941   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.874283   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.874312   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.874426   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:27:15.874870   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:27:15.875037   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:27:15.875102   30218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:27:15.875129   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:15.879881   30218 ssh_runner.go:195] Run: cat /version.json
	I1207 20:27:15.879908   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:15.882642   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.882758   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.882990   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.883017   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.883045   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:15.883067   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:15.883135   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:15.883324   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:15.883338   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:15.883495   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:15.883500   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:15.883642   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:27:15.883704   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:15.883839   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:27:15.987888   30218 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1207 20:27:15.988719   30218 command_runner.go:130] > {"iso_version": "v1.32.1-1701788780-17711", "kicbase_version": "v0.0.42-1701685682-17711", "minikube_version": "v1.32.0", "commit": "3d3a6783269a57f5d9691dd9fa861c5802b7a18b"}
	I1207 20:27:15.988884   30218 ssh_runner.go:195] Run: systemctl --version
	I1207 20:27:15.994577   30218 command_runner.go:130] > systemd 247 (247)
	I1207 20:27:15.994595   30218 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1207 20:27:15.994643   30218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 20:27:16.154073   30218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1207 20:27:16.160129   30218 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1207 20:27:16.160177   30218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 20:27:16.160237   30218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 20:27:16.174230   30218 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1207 20:27:16.174557   30218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 20:27:16.174592   30218 start.go:475] detecting cgroup driver to use...
	I1207 20:27:16.174671   30218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 20:27:16.188677   30218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 20:27:16.200482   30218 docker.go:203] disabling cri-docker service (if available) ...
	I1207 20:27:16.200533   30218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 20:27:16.212655   30218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 20:27:16.224555   30218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 20:27:16.325340   30218 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1207 20:27:16.325430   30218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 20:27:16.339143   30218 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1207 20:27:16.444830   30218 docker.go:219] disabling docker service ...
	I1207 20:27:16.444911   30218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 20:27:16.458639   30218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 20:27:16.470509   30218 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1207 20:27:16.470582   30218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 20:27:16.581392   30218 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1207 20:27:16.581491   30218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 20:27:16.594375   30218 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1207 20:27:16.594703   30218 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1207 20:27:16.685371   30218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 20:27:16.698874   30218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:27:16.715553   30218 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1207 20:27:16.715598   30218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 20:27:16.715649   30218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:27:16.725487   30218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 20:27:16.725537   30218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:27:16.735698   30218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:27:16.745219   30218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:27:16.754927   30218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
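Taken together, the sed edits above point CRI-O at the expected pause image, switch it to the cgroupfs driver, and pin conmon's cgroup. Reconstructed from those commands (not captured from the host, and without guessing at the surrounding TOML sections), the relevant lines of /etc/crio/crio.conf.d/02-crio.conf end up roughly as:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"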
	I1207 20:27:16.764838   30218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:27:16.773491   30218 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 20:27:16.773597   30218 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 20:27:16.773646   30218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 20:27:16.786978   30218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 20:27:16.795825   30218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:27:16.899995   30218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 20:27:17.065201   30218 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 20:27:17.065273   30218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 20:27:17.070163   30218 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1207 20:27:17.070188   30218 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1207 20:27:17.070195   30218 command_runner.go:130] > Device: 16h/22d	Inode: 758         Links: 1
	I1207 20:27:17.070202   30218 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1207 20:27:17.070209   30218 command_runner.go:130] > Access: 2023-12-07 20:27:17.024533040 +0000
	I1207 20:27:17.070218   30218 command_runner.go:130] > Modify: 2023-12-07 20:27:17.024533040 +0000
	I1207 20:27:17.070228   30218 command_runner.go:130] > Change: 2023-12-07 20:27:17.024533040 +0000
	I1207 20:27:17.070235   30218 command_runner.go:130] >  Birth: -
	I1207 20:27:17.070253   30218 start.go:543] Will wait 60s for crictl version
	I1207 20:27:17.070291   30218 ssh_runner.go:195] Run: which crictl
	I1207 20:27:17.074108   30218 command_runner.go:130] > /usr/bin/crictl
	I1207 20:27:17.074170   30218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 20:27:17.114976   30218 command_runner.go:130] > Version:  0.1.0
	I1207 20:27:17.115022   30218 command_runner.go:130] > RuntimeName:  cri-o
	I1207 20:27:17.115031   30218 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1207 20:27:17.115040   30218 command_runner.go:130] > RuntimeApiVersion:  v1
	I1207 20:27:17.115057   30218 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 20:27:17.115140   30218 ssh_runner.go:195] Run: crio --version
	I1207 20:27:17.165991   30218 command_runner.go:130] > crio version 1.24.1
	I1207 20:27:17.166017   30218 command_runner.go:130] > Version:          1.24.1
	I1207 20:27:17.166028   30218 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1207 20:27:17.166034   30218 command_runner.go:130] > GitTreeState:     dirty
	I1207 20:27:17.166042   30218 command_runner.go:130] > BuildDate:        2023-12-05T19:18:32Z
	I1207 20:27:17.166049   30218 command_runner.go:130] > GoVersion:        go1.19.9
	I1207 20:27:17.166056   30218 command_runner.go:130] > Compiler:         gc
	I1207 20:27:17.166066   30218 command_runner.go:130] > Platform:         linux/amd64
	I1207 20:27:17.166075   30218 command_runner.go:130] > Linkmode:         dynamic
	I1207 20:27:17.166093   30218 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1207 20:27:17.166103   30218 command_runner.go:130] > SeccompEnabled:   true
	I1207 20:27:17.166111   30218 command_runner.go:130] > AppArmorEnabled:  false
	I1207 20:27:17.166197   30218 ssh_runner.go:195] Run: crio --version
	I1207 20:27:17.210740   30218 command_runner.go:130] > crio version 1.24.1
	I1207 20:27:17.210762   30218 command_runner.go:130] > Version:          1.24.1
	I1207 20:27:17.210771   30218 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1207 20:27:17.210776   30218 command_runner.go:130] > GitTreeState:     dirty
	I1207 20:27:17.210788   30218 command_runner.go:130] > BuildDate:        2023-12-05T19:18:32Z
	I1207 20:27:17.210796   30218 command_runner.go:130] > GoVersion:        go1.19.9
	I1207 20:27:17.210803   30218 command_runner.go:130] > Compiler:         gc
	I1207 20:27:17.210809   30218 command_runner.go:130] > Platform:         linux/amd64
	I1207 20:27:17.210832   30218 command_runner.go:130] > Linkmode:         dynamic
	I1207 20:27:17.210845   30218 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1207 20:27:17.210852   30218 command_runner.go:130] > SeccompEnabled:   true
	I1207 20:27:17.210860   30218 command_runner.go:130] > AppArmorEnabled:  false
	I1207 20:27:17.214321   30218 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
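Before that summary line, the restart sequence waits up to 60s for the CRI-O socket to reappear and then confirms the runtime answers "crictl version" and "crio --version". A rough Go sketch of that readiness loop (uses the standard time and errors packages; runCmd is again an assumed SSH-runner helper, not the real implementation):

	// waitForCRIO polls for the runtime socket, then asks the runtime for its
	// version so a hung daemon is caught early instead of failing later.
	func waitForCRIO(runCmd func(args ...string) error, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for runCmd("stat", "/var/run/crio/crio.sock") != nil {
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for /var/run/crio/crio.sock")
			}
			time.Sleep(interval)
		}
		return runCmd("sudo", "crictl", "version")
	}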
	I1207 20:27:17.216006   30218 main.go:141] libmachine: (multinode-660958) Calling .GetIP
	I1207 20:27:17.218736   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:17.219017   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:17.219043   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:17.219227   30218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 20:27:17.223069   30218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
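The bash one-liner above keeps the host.minikube.internal entry idempotent: any existing entry is filtered out of /etc/hosts, the gateway IP 192.168.39.1 is appended, and the temporary file is copied back over /etc/hosts. The same transformation expressed as a small Go helper (uses the strings package; names are made up for illustration):

	// hostsWithInternalEntry rebuilds /etc/hosts content with exactly one
	// host.minikube.internal line pointing at the given gateway IP.
	func hostsWithInternalEntry(existing string, gatewayIP string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(existing, "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue // drop any stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, gatewayIP+"\thost.minikube.internal")
		return strings.Join(kept, "\n") + "\n"
	}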
	I1207 20:27:17.234474   30218 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:27:17.234522   30218 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 20:27:17.267035   30218 command_runner.go:130] > {
	I1207 20:27:17.267055   30218 command_runner.go:130] >   "images": [
	I1207 20:27:17.267060   30218 command_runner.go:130] >   ]
	I1207 20:27:17.267064   30218 command_runner.go:130] > }
	I1207 20:27:17.267186   30218 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 20:27:17.267251   30218 ssh_runner.go:195] Run: which lz4
	I1207 20:27:17.270846   30218 command_runner.go:130] > /usr/bin/lz4
	I1207 20:27:17.270882   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1207 20:27:17.270966   30218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 20:27:17.274910   30218 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 20:27:17.275002   30218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 20:27:17.275023   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 20:27:19.056694   30218 crio.go:444] Took 1.785757 seconds to copy over tarball
	I1207 20:27:19.056759   30218 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 20:27:22.095260   30218 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.038473833s)
	I1207 20:27:22.095290   30218 crio.go:451] Took 3.038574 seconds to extract the tarball
	I1207 20:27:22.095299   30218 ssh_runner.go:146] rm: /preloaded.tar.lz4
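Because the first crictl listing was empty, the preloaded-images tarball (roughly 458 MB) is copied into the guest and unpacked under /var, where CRI-O's storage lives; the copy took about 1.8s and the extraction about 3.0s in this run. A compact Go sketch of that restore path, with copyToGuest and runCmd as assumed helpers rather than minikube's real functions:

	// restorePreload pushes the lz4-compressed image tarball into the guest,
	// extracts it under /var, and removes the temporary archive.
	func restorePreload(localTarball string, copyToGuest func(src, dst string) error, runCmd func(args ...string) error) error {
		const remote = "/preloaded.tar.lz4"
		if err := copyToGuest(localTarball, remote); err != nil {
			return err
		}
		// tar -I lz4 streams the archive through lz4 before extracting it.
		if err := runCmd("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", remote); err != nil {
			return err
		}
		return runCmd("sudo", "rm", "-f", remote)
	}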
	I1207 20:27:22.136993   30218 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 20:27:22.206398   30218 command_runner.go:130] > {
	I1207 20:27:22.206418   30218 command_runner.go:130] >   "images": [
	I1207 20:27:22.206424   30218 command_runner.go:130] >     {
	I1207 20:27:22.206435   30218 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1207 20:27:22.206443   30218 command_runner.go:130] >       "repoTags": [
	I1207 20:27:22.206451   30218 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1207 20:27:22.206456   30218 command_runner.go:130] >       ],
	I1207 20:27:22.206462   30218 command_runner.go:130] >       "repoDigests": [
	I1207 20:27:22.206474   30218 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1207 20:27:22.206486   30218 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1207 20:27:22.206497   30218 command_runner.go:130] >       ],
	I1207 20:27:22.206505   30218 command_runner.go:130] >       "size": "65258016",
	I1207 20:27:22.206516   30218 command_runner.go:130] >       "uid": null,
	I1207 20:27:22.206528   30218 command_runner.go:130] >       "username": "",
	I1207 20:27:22.206538   30218 command_runner.go:130] >       "spec": null,
	I1207 20:27:22.206546   30218 command_runner.go:130] >       "pinned": false
	I1207 20:27:22.206553   30218 command_runner.go:130] >     },
	I1207 20:27:22.206560   30218 command_runner.go:130] >     {
	I1207 20:27:22.206572   30218 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1207 20:27:22.206593   30218 command_runner.go:130] >       "repoTags": [
	I1207 20:27:22.206606   30218 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1207 20:27:22.206613   30218 command_runner.go:130] >       ],
	I1207 20:27:22.206622   30218 command_runner.go:130] >       "repoDigests": [
	I1207 20:27:22.206636   30218 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1207 20:27:22.206654   30218 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1207 20:27:22.206663   30218 command_runner.go:130] >       ],
	I1207 20:27:22.206679   30218 command_runner.go:130] >       "size": "31470524",
	I1207 20:27:22.206689   30218 command_runner.go:130] >       "uid": null,
	I1207 20:27:22.206699   30218 command_runner.go:130] >       "username": "",
	I1207 20:27:22.206709   30218 command_runner.go:130] >       "spec": null,
	I1207 20:27:22.206718   30218 command_runner.go:130] >       "pinned": false
	I1207 20:27:22.206731   30218 command_runner.go:130] >     },
	I1207 20:27:22.206741   30218 command_runner.go:130] >     {
	I1207 20:27:22.206753   30218 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1207 20:27:22.206763   30218 command_runner.go:130] >       "repoTags": [
	I1207 20:27:22.206772   30218 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1207 20:27:22.206782   30218 command_runner.go:130] >       ],
	I1207 20:27:22.206788   30218 command_runner.go:130] >       "repoDigests": [
	I1207 20:27:22.206799   30218 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1207 20:27:22.206814   30218 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1207 20:27:22.206824   30218 command_runner.go:130] >       ],
	I1207 20:27:22.206833   30218 command_runner.go:130] >       "size": "53621675",
	I1207 20:27:22.206841   30218 command_runner.go:130] >       "uid": null,
	I1207 20:27:22.206847   30218 command_runner.go:130] >       "username": "",
	I1207 20:27:22.206854   30218 command_runner.go:130] >       "spec": null,
	I1207 20:27:22.206864   30218 command_runner.go:130] >       "pinned": false
	I1207 20:27:22.206871   30218 command_runner.go:130] >     },
	I1207 20:27:22.206877   30218 command_runner.go:130] >     {
	I1207 20:27:22.206891   30218 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1207 20:27:22.206906   30218 command_runner.go:130] >       "repoTags": [
	I1207 20:27:22.206917   30218 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1207 20:27:22.206930   30218 command_runner.go:130] >       ],
	I1207 20:27:22.206938   30218 command_runner.go:130] >       "repoDigests": [
	I1207 20:27:22.206948   30218 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1207 20:27:22.206962   30218 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1207 20:27:22.206980   30218 command_runner.go:130] >       ],
	I1207 20:27:22.206991   30218 command_runner.go:130] >       "size": "295456551",
	I1207 20:27:22.206998   30218 command_runner.go:130] >       "uid": {
	I1207 20:27:22.207009   30218 command_runner.go:130] >         "value": "0"
	I1207 20:27:22.207018   30218 command_runner.go:130] >       },
	I1207 20:27:22.207028   30218 command_runner.go:130] >       "username": "",
	I1207 20:27:22.207038   30218 command_runner.go:130] >       "spec": null,
	I1207 20:27:22.207048   30218 command_runner.go:130] >       "pinned": false
	I1207 20:27:22.207057   30218 command_runner.go:130] >     },
	I1207 20:27:22.207066   30218 command_runner.go:130] >     {
	I1207 20:27:22.207080   30218 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1207 20:27:22.207090   30218 command_runner.go:130] >       "repoTags": [
	I1207 20:27:22.207104   30218 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1207 20:27:22.207114   30218 command_runner.go:130] >       ],
	I1207 20:27:22.207124   30218 command_runner.go:130] >       "repoDigests": [
	I1207 20:27:22.207139   30218 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1207 20:27:22.207153   30218 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1207 20:27:22.207162   30218 command_runner.go:130] >       ],
	I1207 20:27:22.207173   30218 command_runner.go:130] >       "size": "127226832",
	I1207 20:27:22.207183   30218 command_runner.go:130] >       "uid": {
	I1207 20:27:22.207194   30218 command_runner.go:130] >         "value": "0"
	I1207 20:27:22.207200   30218 command_runner.go:130] >       },
	I1207 20:27:22.207211   30218 command_runner.go:130] >       "username": "",
	I1207 20:27:22.207221   30218 command_runner.go:130] >       "spec": null,
	I1207 20:27:22.207231   30218 command_runner.go:130] >       "pinned": false
	I1207 20:27:22.207240   30218 command_runner.go:130] >     },
	I1207 20:27:22.207249   30218 command_runner.go:130] >     {
	I1207 20:27:22.207261   30218 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1207 20:27:22.207268   30218 command_runner.go:130] >       "repoTags": [
	I1207 20:27:22.207278   30218 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1207 20:27:22.207309   30218 command_runner.go:130] >       ],
	I1207 20:27:22.207328   30218 command_runner.go:130] >       "repoDigests": [
	I1207 20:27:22.207340   30218 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1207 20:27:22.207354   30218 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1207 20:27:22.207364   30218 command_runner.go:130] >       ],
	I1207 20:27:22.207371   30218 command_runner.go:130] >       "size": "123261750",
	I1207 20:27:22.207381   30218 command_runner.go:130] >       "uid": {
	I1207 20:27:22.207390   30218 command_runner.go:130] >         "value": "0"
	I1207 20:27:22.207400   30218 command_runner.go:130] >       },
	I1207 20:27:22.207409   30218 command_runner.go:130] >       "username": "",
	I1207 20:27:22.207417   30218 command_runner.go:130] >       "spec": null,
	I1207 20:27:22.207427   30218 command_runner.go:130] >       "pinned": false
	I1207 20:27:22.207436   30218 command_runner.go:130] >     },
	I1207 20:27:22.207443   30218 command_runner.go:130] >     {
	I1207 20:27:22.207457   30218 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1207 20:27:22.207467   30218 command_runner.go:130] >       "repoTags": [
	I1207 20:27:22.207479   30218 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1207 20:27:22.207488   30218 command_runner.go:130] >       ],
	I1207 20:27:22.207505   30218 command_runner.go:130] >       "repoDigests": [
	I1207 20:27:22.207517   30218 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1207 20:27:22.207533   30218 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1207 20:27:22.207543   30218 command_runner.go:130] >       ],
	I1207 20:27:22.207550   30218 command_runner.go:130] >       "size": "74749335",
	I1207 20:27:22.207560   30218 command_runner.go:130] >       "uid": null,
	I1207 20:27:22.207570   30218 command_runner.go:130] >       "username": "",
	I1207 20:27:22.207577   30218 command_runner.go:130] >       "spec": null,
	I1207 20:27:22.207592   30218 command_runner.go:130] >       "pinned": false
	I1207 20:27:22.207601   30218 command_runner.go:130] >     },
	I1207 20:27:22.207610   30218 command_runner.go:130] >     {
	I1207 20:27:22.207620   30218 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1207 20:27:22.207630   30218 command_runner.go:130] >       "repoTags": [
	I1207 20:27:22.207642   30218 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1207 20:27:22.207651   30218 command_runner.go:130] >       ],
	I1207 20:27:22.207659   30218 command_runner.go:130] >       "repoDigests": [
	I1207 20:27:22.207699   30218 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1207 20:27:22.207714   30218 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1207 20:27:22.207725   30218 command_runner.go:130] >       ],
	I1207 20:27:22.207735   30218 command_runner.go:130] >       "size": "61551410",
	I1207 20:27:22.207745   30218 command_runner.go:130] >       "uid": {
	I1207 20:27:22.207756   30218 command_runner.go:130] >         "value": "0"
	I1207 20:27:22.207762   30218 command_runner.go:130] >       },
	I1207 20:27:22.207773   30218 command_runner.go:130] >       "username": "",
	I1207 20:27:22.207783   30218 command_runner.go:130] >       "spec": null,
	I1207 20:27:22.207793   30218 command_runner.go:130] >       "pinned": false
	I1207 20:27:22.207802   30218 command_runner.go:130] >     },
	I1207 20:27:22.207811   30218 command_runner.go:130] >     {
	I1207 20:27:22.207824   30218 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1207 20:27:22.207831   30218 command_runner.go:130] >       "repoTags": [
	I1207 20:27:22.207838   30218 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1207 20:27:22.207848   30218 command_runner.go:130] >       ],
	I1207 20:27:22.207859   30218 command_runner.go:130] >       "repoDigests": [
	I1207 20:27:22.207871   30218 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1207 20:27:22.207886   30218 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1207 20:27:22.207895   30218 command_runner.go:130] >       ],
	I1207 20:27:22.207908   30218 command_runner.go:130] >       "size": "750414",
	I1207 20:27:22.207918   30218 command_runner.go:130] >       "uid": {
	I1207 20:27:22.207927   30218 command_runner.go:130] >         "value": "65535"
	I1207 20:27:22.207934   30218 command_runner.go:130] >       },
	I1207 20:27:22.207941   30218 command_runner.go:130] >       "username": "",
	I1207 20:27:22.207951   30218 command_runner.go:130] >       "spec": null,
	I1207 20:27:22.207961   30218 command_runner.go:130] >       "pinned": false
	I1207 20:27:22.207968   30218 command_runner.go:130] >     }
	I1207 20:27:22.207977   30218 command_runner.go:130] >   ]
	I1207 20:27:22.207985   30218 command_runner.go:130] > }
	I1207 20:27:22.208136   30218 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 20:27:22.208147   30218 cache_images.go:84] Images are preloaded, skipping loading
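The second "crictl images --output json" call is what decides that image loading can be skipped: the listing must now contain the expected control-plane image (here registry.k8s.io/kube-apiserver:v1.28.4, which was missing before the preload). A small Go example of parsing that output, using only the fields visible in the log above and the standard encoding/json package (a sketch, not minikube's code):

	// crictlImageList matches the subset of "crictl images --output json"
	// shown in the log above.
	type crictlImageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether the listing contains a given repo tag, e.g.
	// "registry.k8s.io/kube-apiserver:v1.28.4".
	func hasImage(out []byte, tag string) (bool, error) {
		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}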
	I1207 20:27:22.208223   30218 ssh_runner.go:195] Run: crio config
	I1207 20:27:22.255609   30218 command_runner.go:130] ! time="2023-12-07 20:27:22.236459381Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1207 20:27:22.255632   30218 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1207 20:27:22.263617   30218 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1207 20:27:22.263645   30218 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1207 20:27:22.263662   30218 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1207 20:27:22.263668   30218 command_runner.go:130] > #
	I1207 20:27:22.263679   30218 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1207 20:27:22.263689   30218 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1207 20:27:22.263701   30218 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1207 20:27:22.263715   30218 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1207 20:27:22.263725   30218 command_runner.go:130] > # reload'.
	I1207 20:27:22.263739   30218 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1207 20:27:22.263749   30218 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1207 20:27:22.263768   30218 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1207 20:27:22.263778   30218 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1207 20:27:22.263786   30218 command_runner.go:130] > [crio]
	I1207 20:27:22.263796   30218 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1207 20:27:22.263807   30218 command_runner.go:130] > # containers images, in this directory.
	I1207 20:27:22.263814   30218 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1207 20:27:22.263829   30218 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1207 20:27:22.263841   30218 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1207 20:27:22.263853   30218 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1207 20:27:22.263868   30218 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1207 20:27:22.263879   30218 command_runner.go:130] > storage_driver = "overlay"
	I1207 20:27:22.263890   30218 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1207 20:27:22.263899   30218 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1207 20:27:22.263905   30218 command_runner.go:130] > storage_option = [
	I1207 20:27:22.263910   30218 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1207 20:27:22.263915   30218 command_runner.go:130] > ]
	I1207 20:27:22.263922   30218 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1207 20:27:22.263930   30218 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1207 20:27:22.263935   30218 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1207 20:27:22.263943   30218 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1207 20:27:22.263949   30218 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1207 20:27:22.263955   30218 command_runner.go:130] > # always happen on a node reboot
	I1207 20:27:22.263960   30218 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1207 20:27:22.263969   30218 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1207 20:27:22.263976   30218 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1207 20:27:22.263989   30218 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1207 20:27:22.263999   30218 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1207 20:27:22.264011   30218 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1207 20:27:22.264021   30218 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1207 20:27:22.264027   30218 command_runner.go:130] > # internal_wipe = true
	I1207 20:27:22.264033   30218 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1207 20:27:22.264041   30218 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1207 20:27:22.264049   30218 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1207 20:27:22.264057   30218 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1207 20:27:22.264067   30218 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1207 20:27:22.264073   30218 command_runner.go:130] > [crio.api]
	I1207 20:27:22.264078   30218 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1207 20:27:22.264085   30218 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1207 20:27:22.264090   30218 command_runner.go:130] > # IP address on which the stream server will listen.
	I1207 20:27:22.264097   30218 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1207 20:27:22.264103   30218 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1207 20:27:22.264110   30218 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1207 20:27:22.264115   30218 command_runner.go:130] > # stream_port = "0"
	I1207 20:27:22.264122   30218 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1207 20:27:22.264129   30218 command_runner.go:130] > # stream_enable_tls = false
	I1207 20:27:22.264137   30218 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1207 20:27:22.264144   30218 command_runner.go:130] > # stream_idle_timeout = ""
	I1207 20:27:22.264150   30218 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1207 20:27:22.264158   30218 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1207 20:27:22.264164   30218 command_runner.go:130] > # minutes.
	I1207 20:27:22.264169   30218 command_runner.go:130] > # stream_tls_cert = ""
	I1207 20:27:22.264177   30218 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1207 20:27:22.264185   30218 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1207 20:27:22.264189   30218 command_runner.go:130] > # stream_tls_key = ""
	I1207 20:27:22.264196   30218 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1207 20:27:22.264205   30218 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1207 20:27:22.264213   30218 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1207 20:27:22.264217   30218 command_runner.go:130] > # stream_tls_ca = ""
	I1207 20:27:22.264226   30218 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1207 20:27:22.264233   30218 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1207 20:27:22.264240   30218 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1207 20:27:22.264246   30218 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1207 20:27:22.264267   30218 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1207 20:27:22.264278   30218 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1207 20:27:22.264282   30218 command_runner.go:130] > [crio.runtime]
	I1207 20:27:22.264288   30218 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1207 20:27:22.264293   30218 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1207 20:27:22.264300   30218 command_runner.go:130] > # "nofile=1024:2048"
	I1207 20:27:22.264305   30218 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1207 20:27:22.264311   30218 command_runner.go:130] > # default_ulimits = [
	I1207 20:27:22.264315   30218 command_runner.go:130] > # ]
	I1207 20:27:22.264323   30218 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1207 20:27:22.264328   30218 command_runner.go:130] > # no_pivot = false
	I1207 20:27:22.264333   30218 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1207 20:27:22.264342   30218 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1207 20:27:22.264349   30218 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1207 20:27:22.264354   30218 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1207 20:27:22.264361   30218 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1207 20:27:22.264368   30218 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1207 20:27:22.264374   30218 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1207 20:27:22.264379   30218 command_runner.go:130] > # Cgroup setting for conmon
	I1207 20:27:22.264389   30218 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1207 20:27:22.264395   30218 command_runner.go:130] > conmon_cgroup = "pod"
	I1207 20:27:22.264402   30218 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1207 20:27:22.264409   30218 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1207 20:27:22.264418   30218 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1207 20:27:22.264424   30218 command_runner.go:130] > conmon_env = [
	I1207 20:27:22.264430   30218 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1207 20:27:22.264436   30218 command_runner.go:130] > ]
	I1207 20:27:22.264442   30218 command_runner.go:130] > # Additional environment variables to set for all the
	I1207 20:27:22.264450   30218 command_runner.go:130] > # containers. These are overridden if set in the
	I1207 20:27:22.264457   30218 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1207 20:27:22.264470   30218 command_runner.go:130] > # default_env = [
	I1207 20:27:22.264473   30218 command_runner.go:130] > # ]
	I1207 20:27:22.264480   30218 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1207 20:27:22.264489   30218 command_runner.go:130] > # selinux = false
	I1207 20:27:22.264501   30218 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1207 20:27:22.264514   30218 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1207 20:27:22.264526   30218 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1207 20:27:22.264540   30218 command_runner.go:130] > # seccomp_profile = ""
	I1207 20:27:22.264552   30218 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1207 20:27:22.264564   30218 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1207 20:27:22.264576   30218 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1207 20:27:22.264587   30218 command_runner.go:130] > # which might increase security.
	I1207 20:27:22.264595   30218 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1207 20:27:22.264602   30218 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1207 20:27:22.264611   30218 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1207 20:27:22.264619   30218 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1207 20:27:22.264625   30218 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1207 20:27:22.264633   30218 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:27:22.264638   30218 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1207 20:27:22.264645   30218 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1207 20:27:22.264652   30218 command_runner.go:130] > # the cgroup blockio controller.
	I1207 20:27:22.264657   30218 command_runner.go:130] > # blockio_config_file = ""
	I1207 20:27:22.264665   30218 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1207 20:27:22.264671   30218 command_runner.go:130] > # irqbalance daemon.
	I1207 20:27:22.264676   30218 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1207 20:27:22.264687   30218 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1207 20:27:22.264694   30218 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:27:22.264701   30218 command_runner.go:130] > # rdt_config_file = ""
	I1207 20:27:22.264706   30218 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1207 20:27:22.264711   30218 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1207 20:27:22.264717   30218 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1207 20:27:22.264723   30218 command_runner.go:130] > # separate_pull_cgroup = ""
	I1207 20:27:22.264729   30218 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1207 20:27:22.264741   30218 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1207 20:27:22.264747   30218 command_runner.go:130] > # will be added.
	I1207 20:27:22.264752   30218 command_runner.go:130] > # default_capabilities = [
	I1207 20:27:22.264758   30218 command_runner.go:130] > # 	"CHOWN",
	I1207 20:27:22.264762   30218 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1207 20:27:22.264768   30218 command_runner.go:130] > # 	"FSETID",
	I1207 20:27:22.264772   30218 command_runner.go:130] > # 	"FOWNER",
	I1207 20:27:22.264778   30218 command_runner.go:130] > # 	"SETGID",
	I1207 20:27:22.264781   30218 command_runner.go:130] > # 	"SETUID",
	I1207 20:27:22.264787   30218 command_runner.go:130] > # 	"SETPCAP",
	I1207 20:27:22.264796   30218 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1207 20:27:22.264802   30218 command_runner.go:130] > # 	"KILL",
	I1207 20:27:22.264806   30218 command_runner.go:130] > # ]
	I1207 20:27:22.264815   30218 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1207 20:27:22.264822   30218 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1207 20:27:22.264829   30218 command_runner.go:130] > # default_sysctls = [
	I1207 20:27:22.264832   30218 command_runner.go:130] > # ]
	I1207 20:27:22.264840   30218 command_runner.go:130] > # List of devices on the host that a
	I1207 20:27:22.264846   30218 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1207 20:27:22.264852   30218 command_runner.go:130] > # allowed_devices = [
	I1207 20:27:22.264856   30218 command_runner.go:130] > # 	"/dev/fuse",
	I1207 20:27:22.264861   30218 command_runner.go:130] > # ]
	I1207 20:27:22.264866   30218 command_runner.go:130] > # List of additional devices. specified as
	I1207 20:27:22.264876   30218 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1207 20:27:22.264881   30218 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1207 20:27:22.264917   30218 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1207 20:27:22.264925   30218 command_runner.go:130] > # additional_devices = [
	I1207 20:27:22.264929   30218 command_runner.go:130] > # ]
	I1207 20:27:22.264937   30218 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1207 20:27:22.264941   30218 command_runner.go:130] > # cdi_spec_dirs = [
	I1207 20:27:22.264947   30218 command_runner.go:130] > # 	"/etc/cdi",
	I1207 20:27:22.264951   30218 command_runner.go:130] > # 	"/var/run/cdi",
	I1207 20:27:22.264957   30218 command_runner.go:130] > # ]
	I1207 20:27:22.264963   30218 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1207 20:27:22.264971   30218 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1207 20:27:22.264977   30218 command_runner.go:130] > # Defaults to false.
	I1207 20:27:22.264982   30218 command_runner.go:130] > # device_ownership_from_security_context = false
	I1207 20:27:22.264990   30218 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1207 20:27:22.264998   30218 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1207 20:27:22.265002   30218 command_runner.go:130] > # hooks_dir = [
	I1207 20:27:22.265007   30218 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1207 20:27:22.265012   30218 command_runner.go:130] > # ]
	I1207 20:27:22.265019   30218 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1207 20:27:22.265027   30218 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1207 20:27:22.265035   30218 command_runner.go:130] > # its default mounts from the following two files:
	I1207 20:27:22.265041   30218 command_runner.go:130] > #
	I1207 20:27:22.265049   30218 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1207 20:27:22.265058   30218 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1207 20:27:22.265066   30218 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1207 20:27:22.265071   30218 command_runner.go:130] > #
	I1207 20:27:22.265077   30218 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1207 20:27:22.265085   30218 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1207 20:27:22.265093   30218 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1207 20:27:22.265100   30218 command_runner.go:130] > #      only add mounts it finds in this file.
	I1207 20:27:22.265104   30218 command_runner.go:130] > #
	I1207 20:27:22.265110   30218 command_runner.go:130] > # default_mounts_file = ""
	I1207 20:27:22.265116   30218 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1207 20:27:22.265125   30218 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1207 20:27:22.265131   30218 command_runner.go:130] > pids_limit = 1024
	I1207 20:27:22.265137   30218 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1207 20:27:22.265145   30218 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1207 20:27:22.265153   30218 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1207 20:27:22.265163   30218 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1207 20:27:22.265169   30218 command_runner.go:130] > # log_size_max = -1
	I1207 20:27:22.265178   30218 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1207 20:27:22.265184   30218 command_runner.go:130] > # log_to_journald = false
	I1207 20:27:22.265190   30218 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1207 20:27:22.265197   30218 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1207 20:27:22.265202   30218 command_runner.go:130] > # Path to directory for container attach sockets.
	I1207 20:27:22.265207   30218 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1207 20:27:22.265215   30218 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1207 20:27:22.265221   30218 command_runner.go:130] > # bind_mount_prefix = ""
	I1207 20:27:22.265227   30218 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1207 20:27:22.265233   30218 command_runner.go:130] > # read_only = false
	I1207 20:27:22.265239   30218 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1207 20:27:22.265251   30218 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1207 20:27:22.265257   30218 command_runner.go:130] > # live configuration reload.
	I1207 20:27:22.265262   30218 command_runner.go:130] > # log_level = "info"
	I1207 20:27:22.265269   30218 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1207 20:27:22.265276   30218 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:27:22.265281   30218 command_runner.go:130] > # log_filter = ""
	I1207 20:27:22.265288   30218 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1207 20:27:22.265297   30218 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1207 20:27:22.265303   30218 command_runner.go:130] > # separated by comma.
	I1207 20:27:22.265307   30218 command_runner.go:130] > # uid_mappings = ""
	I1207 20:27:22.265315   30218 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1207 20:27:22.265323   30218 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1207 20:27:22.265329   30218 command_runner.go:130] > # separated by comma.
	I1207 20:27:22.265333   30218 command_runner.go:130] > # gid_mappings = ""
	I1207 20:27:22.265341   30218 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1207 20:27:22.265349   30218 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1207 20:27:22.265357   30218 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1207 20:27:22.265364   30218 command_runner.go:130] > # minimum_mappable_uid = -1
	I1207 20:27:22.265370   30218 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1207 20:27:22.265379   30218 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1207 20:27:22.265387   30218 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1207 20:27:22.265391   30218 command_runner.go:130] > # minimum_mappable_gid = -1
	I1207 20:27:22.265399   30218 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1207 20:27:22.265405   30218 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1207 20:27:22.265413   30218 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1207 20:27:22.265421   30218 command_runner.go:130] > # ctr_stop_timeout = 30
	I1207 20:27:22.265427   30218 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1207 20:27:22.265435   30218 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1207 20:27:22.265442   30218 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1207 20:27:22.265447   30218 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1207 20:27:22.265454   30218 command_runner.go:130] > drop_infra_ctr = false
	I1207 20:27:22.265467   30218 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1207 20:27:22.265475   30218 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1207 20:27:22.265486   30218 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1207 20:27:22.265496   30218 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1207 20:27:22.265506   30218 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1207 20:27:22.265517   30218 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1207 20:27:22.265527   30218 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1207 20:27:22.265541   30218 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1207 20:27:22.265551   30218 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1207 20:27:22.265564   30218 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1207 20:27:22.265577   30218 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1207 20:27:22.265590   30218 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1207 20:27:22.265604   30218 command_runner.go:130] > # default_runtime = "runc"
	I1207 20:27:22.265614   30218 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1207 20:27:22.265624   30218 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1207 20:27:22.265635   30218 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1207 20:27:22.265642   30218 command_runner.go:130] > # creation as a file is not desired either.
	I1207 20:27:22.265650   30218 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1207 20:27:22.265657   30218 command_runner.go:130] > # the hostname is being managed dynamically.
	I1207 20:27:22.265662   30218 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1207 20:27:22.265667   30218 command_runner.go:130] > # ]
	I1207 20:27:22.265675   30218 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1207 20:27:22.265684   30218 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1207 20:27:22.265696   30218 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1207 20:27:22.265704   30218 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1207 20:27:22.265710   30218 command_runner.go:130] > #
	I1207 20:27:22.265714   30218 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1207 20:27:22.265721   30218 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1207 20:27:22.265725   30218 command_runner.go:130] > #  runtime_type = "oci"
	I1207 20:27:22.265733   30218 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1207 20:27:22.265739   30218 command_runner.go:130] > #  privileged_without_host_devices = false
	I1207 20:27:22.265746   30218 command_runner.go:130] > #  allowed_annotations = []
	I1207 20:27:22.265750   30218 command_runner.go:130] > # Where:
	I1207 20:27:22.265758   30218 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1207 20:27:22.265763   30218 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1207 20:27:22.265772   30218 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1207 20:27:22.265780   30218 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1207 20:27:22.265784   30218 command_runner.go:130] > #   in $PATH.
	I1207 20:27:22.265792   30218 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1207 20:27:22.265798   30218 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1207 20:27:22.265805   30218 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1207 20:27:22.265811   30218 command_runner.go:130] > #   state.
	I1207 20:27:22.265817   30218 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1207 20:27:22.265825   30218 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1207 20:27:22.265833   30218 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1207 20:27:22.265841   30218 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1207 20:27:22.265847   30218 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1207 20:27:22.265857   30218 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1207 20:27:22.265867   30218 command_runner.go:130] > #   The currently recognized values are:
	I1207 20:27:22.265875   30218 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1207 20:27:22.265885   30218 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1207 20:27:22.265893   30218 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1207 20:27:22.265899   30218 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1207 20:27:22.265909   30218 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1207 20:27:22.265918   30218 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1207 20:27:22.265936   30218 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1207 20:27:22.265951   30218 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1207 20:27:22.265961   30218 command_runner.go:130] > #   should be moved to the container's cgroup
	I1207 20:27:22.265965   30218 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1207 20:27:22.265971   30218 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1207 20:27:22.265976   30218 command_runner.go:130] > runtime_type = "oci"
	I1207 20:27:22.265986   30218 command_runner.go:130] > runtime_root = "/run/runc"
	I1207 20:27:22.265992   30218 command_runner.go:130] > runtime_config_path = ""
	I1207 20:27:22.265996   30218 command_runner.go:130] > monitor_path = ""
	I1207 20:27:22.266002   30218 command_runner.go:130] > monitor_cgroup = ""
	I1207 20:27:22.266007   30218 command_runner.go:130] > monitor_exec_cgroup = ""
	I1207 20:27:22.266018   30218 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1207 20:27:22.266024   30218 command_runner.go:130] > # running containers
	I1207 20:27:22.266028   30218 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1207 20:27:22.266036   30218 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1207 20:27:22.266086   30218 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1207 20:27:22.266101   30218 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1207 20:27:22.266106   30218 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1207 20:27:22.266111   30218 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1207 20:27:22.266115   30218 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1207 20:27:22.266122   30218 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1207 20:27:22.266127   30218 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1207 20:27:22.266133   30218 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1207 20:27:22.266140   30218 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1207 20:27:22.266147   30218 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1207 20:27:22.266156   30218 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1207 20:27:22.266164   30218 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1207 20:27:22.266174   30218 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1207 20:27:22.266183   30218 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1207 20:27:22.266197   30218 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1207 20:27:22.266209   30218 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1207 20:27:22.266217   30218 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1207 20:27:22.266227   30218 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1207 20:27:22.266232   30218 command_runner.go:130] > # Example:
	I1207 20:27:22.266237   30218 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1207 20:27:22.266244   30218 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1207 20:27:22.266249   30218 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1207 20:27:22.266256   30218 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1207 20:27:22.266262   30218 command_runner.go:130] > # cpuset = 0
	I1207 20:27:22.266267   30218 command_runner.go:130] > # cpushares = "0-1"
	I1207 20:27:22.266273   30218 command_runner.go:130] > # Where:
	I1207 20:27:22.266277   30218 command_runner.go:130] > # The workload name is workload-type.
	I1207 20:27:22.266287   30218 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1207 20:27:22.266295   30218 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1207 20:27:22.266303   30218 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1207 20:27:22.266310   30218 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1207 20:27:22.266318   30218 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1207 20:27:22.266324   30218 command_runner.go:130] > # 
	I1207 20:27:22.266333   30218 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1207 20:27:22.266339   30218 command_runner.go:130] > #
	I1207 20:27:22.266344   30218 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1207 20:27:22.266352   30218 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1207 20:27:22.266361   30218 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1207 20:27:22.266367   30218 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1207 20:27:22.266375   30218 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1207 20:27:22.266379   30218 command_runner.go:130] > [crio.image]
	I1207 20:27:22.266385   30218 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1207 20:27:22.266392   30218 command_runner.go:130] > # default_transport = "docker://"
	I1207 20:27:22.266398   30218 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1207 20:27:22.266406   30218 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1207 20:27:22.266412   30218 command_runner.go:130] > # global_auth_file = ""
	I1207 20:27:22.266417   30218 command_runner.go:130] > # The image used to instantiate infra containers.
	I1207 20:27:22.266424   30218 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:27:22.266432   30218 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1207 20:27:22.266440   30218 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1207 20:27:22.266448   30218 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1207 20:27:22.266453   30218 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:27:22.266457   30218 command_runner.go:130] > # pause_image_auth_file = ""
	I1207 20:27:22.266467   30218 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1207 20:27:22.266472   30218 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1207 20:27:22.266480   30218 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1207 20:27:22.266490   30218 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1207 20:27:22.266496   30218 command_runner.go:130] > # pause_command = "/pause"
	I1207 20:27:22.266505   30218 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1207 20:27:22.266515   30218 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1207 20:27:22.266524   30218 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1207 20:27:22.266533   30218 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1207 20:27:22.266541   30218 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1207 20:27:22.266548   30218 command_runner.go:130] > # signature_policy = ""
	I1207 20:27:22.266556   30218 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1207 20:27:22.266566   30218 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1207 20:27:22.266571   30218 command_runner.go:130] > # changing them here.
	I1207 20:27:22.266577   30218 command_runner.go:130] > # insecure_registries = [
	I1207 20:27:22.266585   30218 command_runner.go:130] > # ]
	I1207 20:27:22.266599   30218 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1207 20:27:22.266610   30218 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1207 20:27:22.266617   30218 command_runner.go:130] > # image_volumes = "mkdir"
	I1207 20:27:22.266629   30218 command_runner.go:130] > # Temporary directory to use for storing big files
	I1207 20:27:22.266636   30218 command_runner.go:130] > # big_files_temporary_dir = ""
	I1207 20:27:22.266642   30218 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1207 20:27:22.266648   30218 command_runner.go:130] > # CNI plugins.
	I1207 20:27:22.266652   30218 command_runner.go:130] > [crio.network]
	I1207 20:27:22.266660   30218 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1207 20:27:22.266672   30218 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1207 20:27:22.266679   30218 command_runner.go:130] > # cni_default_network = ""
	I1207 20:27:22.266685   30218 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1207 20:27:22.266692   30218 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1207 20:27:22.266697   30218 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1207 20:27:22.266703   30218 command_runner.go:130] > # plugin_dirs = [
	I1207 20:27:22.266707   30218 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1207 20:27:22.266713   30218 command_runner.go:130] > # ]
	I1207 20:27:22.266720   30218 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1207 20:27:22.266726   30218 command_runner.go:130] > [crio.metrics]
	I1207 20:27:22.266733   30218 command_runner.go:130] > # Globally enable or disable metrics support.
	I1207 20:27:22.266740   30218 command_runner.go:130] > enable_metrics = true
	I1207 20:27:22.266744   30218 command_runner.go:130] > # Specify enabled metrics collectors.
	I1207 20:27:22.266751   30218 command_runner.go:130] > # Per default all metrics are enabled.
	I1207 20:27:22.266757   30218 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1207 20:27:22.266765   30218 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1207 20:27:22.266771   30218 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1207 20:27:22.266777   30218 command_runner.go:130] > # metrics_collectors = [
	I1207 20:27:22.266781   30218 command_runner.go:130] > # 	"operations",
	I1207 20:27:22.266788   30218 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1207 20:27:22.266793   30218 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1207 20:27:22.266799   30218 command_runner.go:130] > # 	"operations_errors",
	I1207 20:27:22.266803   30218 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1207 20:27:22.266809   30218 command_runner.go:130] > # 	"image_pulls_by_name",
	I1207 20:27:22.266814   30218 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1207 20:27:22.266820   30218 command_runner.go:130] > # 	"image_pulls_failures",
	I1207 20:27:22.266827   30218 command_runner.go:130] > # 	"image_pulls_successes",
	I1207 20:27:22.266833   30218 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1207 20:27:22.266837   30218 command_runner.go:130] > # 	"image_layer_reuse",
	I1207 20:27:22.266844   30218 command_runner.go:130] > # 	"containers_oom_total",
	I1207 20:27:22.266848   30218 command_runner.go:130] > # 	"containers_oom",
	I1207 20:27:22.266852   30218 command_runner.go:130] > # 	"processes_defunct",
	I1207 20:27:22.266859   30218 command_runner.go:130] > # 	"operations_total",
	I1207 20:27:22.266863   30218 command_runner.go:130] > # 	"operations_latency_seconds",
	I1207 20:27:22.266870   30218 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1207 20:27:22.266874   30218 command_runner.go:130] > # 	"operations_errors_total",
	I1207 20:27:22.266881   30218 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1207 20:27:22.266885   30218 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1207 20:27:22.266892   30218 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1207 20:27:22.266896   30218 command_runner.go:130] > # 	"image_pulls_success_total",
	I1207 20:27:22.266902   30218 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1207 20:27:22.266907   30218 command_runner.go:130] > # 	"containers_oom_count_total",
	I1207 20:27:22.266913   30218 command_runner.go:130] > # ]
	I1207 20:27:22.266918   30218 command_runner.go:130] > # The port on which the metrics server will listen.
	I1207 20:27:22.266925   30218 command_runner.go:130] > # metrics_port = 9090
	I1207 20:27:22.266931   30218 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1207 20:27:22.266937   30218 command_runner.go:130] > # metrics_socket = ""
	I1207 20:27:22.266944   30218 command_runner.go:130] > # The certificate for the secure metrics server.
	I1207 20:27:22.266950   30218 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1207 20:27:22.266958   30218 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1207 20:27:22.266965   30218 command_runner.go:130] > # certificate on any modification event.
	I1207 20:27:22.266969   30218 command_runner.go:130] > # metrics_cert = ""
	I1207 20:27:22.266977   30218 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1207 20:27:22.266982   30218 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1207 20:27:22.266987   30218 command_runner.go:130] > # metrics_key = ""
	I1207 20:27:22.266995   30218 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1207 20:27:22.267002   30218 command_runner.go:130] > [crio.tracing]
	I1207 20:27:22.267007   30218 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1207 20:27:22.267014   30218 command_runner.go:130] > # enable_tracing = false
	I1207 20:27:22.267019   30218 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1207 20:27:22.267026   30218 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1207 20:27:22.267031   30218 command_runner.go:130] > # Number of samples to collect per million spans.
	I1207 20:27:22.267039   30218 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1207 20:27:22.267048   30218 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1207 20:27:22.267054   30218 command_runner.go:130] > [crio.stats]
	I1207 20:27:22.267060   30218 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1207 20:27:22.267068   30218 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1207 20:27:22.267072   30218 command_runner.go:130] > # stats_collection_period = 0
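The dump above is the rendered CRI-O configuration on the node. A minimal Go sketch of reading back the two values this run actually sets (pause_image and enable_metrics), assuming the file lives at /etc/crio/crio.conf and using the third-party BurntSushi/toml decoder (the struct below covers only those two keys, not the full config):

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConfig models only the fields inspected here; the real crio.conf
// has many more tables and keys.
type crioConfig struct {
	Crio struct {
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
	fmt.Println("enable_metrics:", cfg.Crio.Metrics.EnableMetrics)
}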
	I1207 20:27:22.267149   30218 cni.go:84] Creating CNI manager for ""
	I1207 20:27:22.267159   30218 cni.go:136] 1 nodes found, recommending kindnet
	I1207 20:27:22.267176   30218 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:27:22.267194   30218 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-660958 NodeName:multinode-660958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 20:27:22.267333   30218 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-660958"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
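The kubeadm config printed above is generated by minikube from templated values. A minimal sketch of the same idea using Go's text/template, with an illustrative struct rather than minikube's actual template data (field names and the template text here are assumptions for the sketch, populated with the values seen in this run):

package main

import (
	"os"
	"text/template"
)

// initValues is a hypothetical subset of the data fed into a kubeadm
// InitConfiguration template; not minikube's own type.
type initValues struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	v := initValues{
		AdvertiseAddress: "192.168.39.19",
		BindPort:         8443,
		NodeName:         "multinode-660958",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	// Render the config to stdout; minikube instead writes it to
	// /var/tmp/minikube/kubeadm.yaml on the node.
	if err := t.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}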
	
	I1207 20:27:22.267398   30218 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-660958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 20:27:22.267444   30218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 20:27:22.278267   30218 command_runner.go:130] > kubeadm
	I1207 20:27:22.278288   30218 command_runner.go:130] > kubectl
	I1207 20:27:22.278292   30218 command_runner.go:130] > kubelet
	I1207 20:27:22.278309   30218 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 20:27:22.278356   30218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 20:27:22.291065   30218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1207 20:27:22.308013   30218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 20:27:22.324802   30218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1207 20:27:22.341338   30218 ssh_runner.go:195] Run: grep 192.168.39.19	control-plane.minikube.internal$ /etc/hosts
	I1207 20:27:22.345292   30218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:27:22.358432   30218 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958 for IP: 192.168.39.19
	I1207 20:27:22.358476   30218 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:27:22.358691   30218 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 20:27:22.358733   30218 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 20:27:22.358784   30218 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key
	I1207 20:27:22.358797   30218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt with IP's: []
	I1207 20:27:22.545598   30218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt ...
	I1207 20:27:22.545630   30218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt: {Name:mk0eb09f17d4f724e3d9990bd138146ebd38a116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:27:22.545795   30218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key ...
	I1207 20:27:22.545805   30218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key: {Name:mk74204ab0e343ce87c6c8391ec6e5a7b01f8328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:27:22.545873   30218 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.key.8a6f02ba
	I1207 20:27:22.545886   30218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.crt.8a6f02ba with IP's: [192.168.39.19 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 20:27:22.652041   30218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.crt.8a6f02ba ...
	I1207 20:27:22.652070   30218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.crt.8a6f02ba: {Name:mk2f845a5dc98a8e8ca35d11a375dc62209c9975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:27:22.652221   30218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.key.8a6f02ba ...
	I1207 20:27:22.652233   30218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.key.8a6f02ba: {Name:mkb325312c70cf9d9862ee445588ed126b1d3cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:27:22.652291   30218 certs.go:337] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.crt.8a6f02ba -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.crt
	I1207 20:27:22.652367   30218 certs.go:341] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.key.8a6f02ba -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.key
	I1207 20:27:22.652424   30218 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.key
	I1207 20:27:22.652437   30218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.crt with IP's: []
	I1207 20:27:22.769180   30218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.crt ...
	I1207 20:27:22.769206   30218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.crt: {Name:mkfda9e5e49298e8acb2cb3d670024365cc91523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:27:22.769345   30218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.key ...
	I1207 20:27:22.769356   30218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.key: {Name:mk73c732ab360d666650d2cecf433f0901e52905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
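The crypto.go steps above generate profile certificates signed by the shared minikube CA. A minimal Go sketch of that pattern, signing a client certificate with an existing CA; the file names, subject, key type (PKCS#1 RSA) and validity period below are assumptions for the sketch, not minikube's exact values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// check aborts on error; fine for a throwaway sketch.
func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the shared CA (assumed PEM-encoded, with a PKCS#1 RSA key).
	caCertPEM, err := os.ReadFile("ca.crt")
	check(err)
	caKeyPEM, err := os.ReadFile("ca.key")
	check(err)
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// Fresh key pair and template for the client certificate.
	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}

	// Sign with the CA and print the resulting certificate as PEM.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &clientKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}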
	I1207 20:27:22.769418   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 20:27:22.769435   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 20:27:22.769452   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 20:27:22.769467   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 20:27:22.769479   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 20:27:22.769495   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 20:27:22.769507   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 20:27:22.769519   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 20:27:22.769561   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 20:27:22.769592   30218 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 20:27:22.769603   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 20:27:22.769629   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 20:27:22.769653   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:27:22.769674   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 20:27:22.769711   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:27:22.769733   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:27:22.769745   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem -> /usr/share/ca-certificates/16840.pem
	I1207 20:27:22.769756   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /usr/share/ca-certificates/168402.pem
	I1207 20:27:22.770331   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 20:27:22.794541   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 20:27:22.814921   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 20:27:22.839285   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 20:27:22.863167   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:27:22.886593   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 20:27:22.910143   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:27:22.933167   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:27:22.956223   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:27:22.978538   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 20:27:23.001387   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 20:27:23.024654   30218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 20:27:23.040102   30218 ssh_runner.go:195] Run: openssl version
	I1207 20:27:23.045274   30218 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1207 20:27:23.045579   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 20:27:23.054800   30218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 20:27:23.059267   30218 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:27:23.059444   30218 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:27:23.059485   30218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 20:27:23.064690   30218 command_runner.go:130] > 51391683
	I1207 20:27:23.064756   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 20:27:23.074063   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 20:27:23.083518   30218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 20:27:23.088176   30218 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:27:23.088206   30218 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:27:23.088255   30218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 20:27:23.093726   30218 command_runner.go:130] > 3ec20f2e
	I1207 20:27:23.093797   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 20:27:23.103800   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:27:23.113707   30218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:27:23.118578   30218 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:27:23.118641   30218 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:27:23.118707   30218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:27:23.124489   30218 command_runner.go:130] > b5213941
	I1207 20:27:23.124574   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 20:27:23.135205   30218 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:27:23.139484   30218 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:27:23.139610   30218 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:27:23.139671   30218 kubeadm.go:404] StartCluster: {Name:multinode-660958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:27:23.139794   30218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 20:27:23.139852   30218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 20:27:23.177353   30218 cri.go:89] found id: ""
	I1207 20:27:23.177428   30218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 20:27:23.186051   30218 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1207 20:27:23.186078   30218 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1207 20:27:23.186087   30218 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1207 20:27:23.186375   30218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 20:27:23.194448   30218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 20:27:23.202783   30218 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1207 20:27:23.202811   30218 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1207 20:27:23.202824   30218 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1207 20:27:23.202835   30218 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 20:27:23.202864   30218 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 20:27:23.202907   30218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
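For reference, the kubeadm init invocation above sketched as a local os/exec call rather than minikube's ssh_runner; the preflight-error list is abbreviated, and actually running this assumes a node provisioned like the one in this log, with the kubeadm binary unpacked under /var/lib/minikube/binaries:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Local sketch of the command shown above; minikube runs it over SSH
	// inside the VM instead.
	cmd := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.28.4:"+os.Getenv("PATH"),
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}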
	I1207 20:27:23.302119   30218 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 20:27:23.302144   30218 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1207 20:27:23.302579   30218 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 20:27:23.302605   30218 command_runner.go:130] > [preflight] Running pre-flight checks
	I1207 20:27:23.549418   30218 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 20:27:23.549452   30218 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 20:27:23.549683   30218 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 20:27:23.549728   30218 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 20:27:23.549901   30218 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 20:27:23.549947   30218 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 20:27:23.792677   30218 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 20:27:23.792769   30218 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 20:27:23.964112   30218 out.go:204]   - Generating certificates and keys ...
	I1207 20:27:23.964194   30218 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1207 20:27:23.964214   30218 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 20:27:23.964335   30218 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 20:27:23.964361   30218 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1207 20:27:24.030518   30218 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 20:27:24.030544   30218 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 20:27:24.134325   30218 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1207 20:27:24.134349   30218 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1207 20:27:24.220745   30218 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1207 20:27:24.220774   30218 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1207 20:27:24.328059   30218 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1207 20:27:24.328088   30218 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1207 20:27:24.554546   30218 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1207 20:27:24.554573   30218 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1207 20:27:24.554866   30218 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-660958] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1207 20:27:24.554883   30218 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-660958] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1207 20:27:24.782772   30218 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1207 20:27:24.782809   30218 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1207 20:27:24.783088   30218 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-660958] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1207 20:27:24.783108   30218 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-660958] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1207 20:27:25.055571   30218 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 20:27:25.055599   30218 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 20:27:25.123907   30218 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 20:27:25.123936   30218 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 20:27:25.400365   30218 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1207 20:27:25.400392   30218 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1207 20:27:25.400606   30218 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 20:27:25.400643   30218 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 20:27:25.474924   30218 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 20:27:25.474964   30218 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 20:27:25.701505   30218 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 20:27:25.701536   30218 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 20:27:25.983021   30218 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 20:27:25.983063   30218 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 20:27:26.146044   30218 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 20:27:26.146084   30218 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 20:27:26.147003   30218 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 20:27:26.147021   30218 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 20:27:26.151587   30218 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 20:27:26.153695   30218 out.go:204]   - Booting up control plane ...
	I1207 20:27:26.151686   30218 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 20:27:26.153835   30218 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 20:27:26.153884   30218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 20:27:26.153996   30218 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 20:27:26.154026   30218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 20:27:26.154127   30218 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 20:27:26.154139   30218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 20:27:26.170175   30218 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 20:27:26.170201   30218 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 20:27:26.171199   30218 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 20:27:26.171220   30218 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 20:27:26.171298   30218 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 20:27:26.171311   30218 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1207 20:27:26.302358   30218 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 20:27:26.302390   30218 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 20:27:34.301690   30218 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004821 seconds
	I1207 20:27:34.301718   30218 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.004821 seconds
	I1207 20:27:34.301875   30218 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 20:27:34.301892   30218 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 20:27:34.322911   30218 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 20:27:34.322937   30218 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 20:27:34.861736   30218 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 20:27:34.861761   30218 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1207 20:27:34.862014   30218 kubeadm.go:322] [mark-control-plane] Marking the node multinode-660958 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 20:27:34.862037   30218 command_runner.go:130] > [mark-control-plane] Marking the node multinode-660958 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 20:27:35.377150   30218 kubeadm.go:322] [bootstrap-token] Using token: f06x1s.qzdnq9phwmiubg4b
	I1207 20:27:35.377181   30218 command_runner.go:130] > [bootstrap-token] Using token: f06x1s.qzdnq9phwmiubg4b
	I1207 20:27:35.378728   30218 out.go:204]   - Configuring RBAC rules ...
	I1207 20:27:35.378919   30218 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 20:27:35.378934   30218 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 20:27:35.388089   30218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 20:27:35.388127   30218 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 20:27:35.399810   30218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 20:27:35.399830   30218 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 20:27:35.403177   30218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 20:27:35.403197   30218 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 20:27:35.408186   30218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 20:27:35.408208   30218 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 20:27:35.413999   30218 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 20:27:35.414014   30218 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 20:27:35.436962   30218 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 20:27:35.436990   30218 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 20:27:35.691095   30218 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 20:27:35.691119   30218 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1207 20:27:35.796448   30218 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 20:27:35.796475   30218 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1207 20:27:35.797451   30218 kubeadm.go:322] 
	I1207 20:27:35.797563   30218 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 20:27:35.797581   30218 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1207 20:27:35.797594   30218 kubeadm.go:322] 
	I1207 20:27:35.797703   30218 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 20:27:35.797721   30218 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1207 20:27:35.797732   30218 kubeadm.go:322] 
	I1207 20:27:35.797799   30218 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 20:27:35.797813   30218 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1207 20:27:35.797903   30218 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 20:27:35.797915   30218 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 20:27:35.797996   30218 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 20:27:35.798012   30218 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 20:27:35.798020   30218 kubeadm.go:322] 
	I1207 20:27:35.798084   30218 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 20:27:35.798091   30218 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1207 20:27:35.798095   30218 kubeadm.go:322] 
	I1207 20:27:35.798163   30218 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 20:27:35.798170   30218 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 20:27:35.798173   30218 kubeadm.go:322] 
	I1207 20:27:35.798214   30218 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 20:27:35.798221   30218 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1207 20:27:35.798294   30218 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 20:27:35.798301   30218 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 20:27:35.798375   30218 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 20:27:35.798384   30218 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 20:27:35.798387   30218 kubeadm.go:322] 
	I1207 20:27:35.798502   30218 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 20:27:35.798521   30218 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1207 20:27:35.798631   30218 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 20:27:35.798643   30218 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1207 20:27:35.798649   30218 kubeadm.go:322] 
	I1207 20:27:35.798764   30218 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token f06x1s.qzdnq9phwmiubg4b \
	I1207 20:27:35.798775   30218 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token f06x1s.qzdnq9phwmiubg4b \
	I1207 20:27:35.798921   30218 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 20:27:35.798930   30218 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 20:27:35.798946   30218 kubeadm.go:322] 	--control-plane 
	I1207 20:27:35.798950   30218 command_runner.go:130] > 	--control-plane 
	I1207 20:27:35.798953   30218 kubeadm.go:322] 
	I1207 20:27:35.799072   30218 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 20:27:35.799089   30218 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1207 20:27:35.799094   30218 kubeadm.go:322] 
	I1207 20:27:35.799189   30218 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token f06x1s.qzdnq9phwmiubg4b \
	I1207 20:27:35.799199   30218 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token f06x1s.qzdnq9phwmiubg4b \
	I1207 20:27:35.799316   30218 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 20:27:35.799327   30218 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 20:27:35.799953   30218 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 20:27:35.799989   30218 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 20:27:35.800027   30218 cni.go:84] Creating CNI manager for ""
	I1207 20:27:35.800040   30218 cni.go:136] 1 nodes found, recommending kindnet
	I1207 20:27:35.801907   30218 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1207 20:27:35.803399   30218 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 20:27:35.811669   30218 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1207 20:27:35.811710   30218 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1207 20:27:35.811720   30218 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1207 20:27:35.811730   30218 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1207 20:27:35.811743   30218 command_runner.go:130] > Access: 2023-12-07 20:27:02.626750912 +0000
	I1207 20:27:35.811751   30218 command_runner.go:130] > Modify: 2023-12-05 19:27:41.000000000 +0000
	I1207 20:27:35.811759   30218 command_runner.go:130] > Change: 2023-12-07 20:27:00.736750912 +0000
	I1207 20:27:35.811764   30218 command_runner.go:130] >  Birth: -
	I1207 20:27:35.811983   30218 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1207 20:27:35.812002   30218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1207 20:27:35.882744   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 20:27:36.958598   30218 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1207 20:27:36.958624   30218 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1207 20:27:36.958631   30218 command_runner.go:130] > serviceaccount/kindnet created
	I1207 20:27:36.958638   30218 command_runner.go:130] > daemonset.apps/kindnet created
	I1207 20:27:36.958719   30218 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.07593758s)
	I1207 20:27:36.958775   30218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 20:27:36.958852   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:36.958855   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=multinode-660958 minikube.k8s.io/updated_at=2023_12_07T20_27_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:36.970905   30218 command_runner.go:130] > -16
	I1207 20:27:36.970939   30218 ops.go:34] apiserver oom_adj: -16
	I1207 20:27:37.124407   30218 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1207 20:27:37.126976   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:37.221506   30218 command_runner.go:130] > node/multinode-660958 labeled
	I1207 20:27:37.263631   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:37.265946   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:37.354140   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:37.856530   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:37.949190   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:38.356779   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:38.442099   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:38.856759   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:38.943497   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:39.355982   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:39.444465   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:39.856771   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:39.951887   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:40.356066   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:40.439503   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:40.856090   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:40.932984   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:41.356329   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:41.438177   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:41.856927   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:41.944532   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:42.356153   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:42.441779   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:42.856914   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:42.940781   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:43.356243   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:43.443254   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:43.855912   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:43.942843   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:44.356693   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:44.436753   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:44.856287   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:44.952161   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:45.356822   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:45.439198   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:45.856879   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:45.947414   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:46.355974   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:46.447419   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:46.856743   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:46.944678   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:47.356731   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:47.474316   30218 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1207 20:27:47.856583   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:27:48.019209   30218 command_runner.go:130] > NAME      SECRETS   AGE
	I1207 20:27:48.019234   30218 command_runner.go:130] > default   0         1s
	I1207 20:27:48.020755   30218 kubeadm.go:1088] duration metric: took 11.061958055s to wait for elevateKubeSystemPrivileges.
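
The retry loop above is the elevateKubeSystemPrivileges wait: minikube repeatedly runs "kubectl get sa default", which returns NotFound until the kube-controller-manager has created the "default" ServiceAccount, and only then continues. A minimal client-go sketch of the same wait is shown below; the kubeconfig path, timeout and poll interval are illustrative choices for this sketch, not values taken from this run.

    // Illustrative sketch only: poll until the "default" ServiceAccount exists,
    // mirroring the retry loop recorded in the log above.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            // NotFound is expected until the controller manager creates the account.
            if _, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
                fmt.Println("default ServiceAccount is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("timed out waiting for the default ServiceAccount")
    }
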
	I1207 20:27:48.020800   30218 kubeadm.go:406] StartCluster complete in 24.881141301s
	I1207 20:27:48.020827   30218 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:27:48.020919   30218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:27:48.021708   30218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:27:48.021959   30218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 20:27:48.022036   30218 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 20:27:48.022109   30218 addons.go:69] Setting storage-provisioner=true in profile "multinode-660958"
	I1207 20:27:48.022149   30218 addons.go:231] Setting addon storage-provisioner=true in "multinode-660958"
	I1207 20:27:48.022167   30218 addons.go:69] Setting default-storageclass=true in profile "multinode-660958"
	I1207 20:27:48.022206   30218 host.go:66] Checking if "multinode-660958" exists ...
	I1207 20:27:48.022208   30218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-660958"
	I1207 20:27:48.022650   30218 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:27:48.022772   30218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:27:48.022834   30218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:27:48.022845   30218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:27:48.022846   30218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:27:48.023004   30218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:27:48.023214   30218 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:27:48.023980   30218 cert_rotation.go:137] Starting client certificate rotation controller
	I1207 20:27:48.024217   30218 round_trippers.go:463] GET https://192.168.39.19:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1207 20:27:48.024233   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:48.024244   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:48.024253   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:48.037241   30218 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1207 20:27:48.037268   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:48.037278   30218 round_trippers.go:580]     Audit-Id: 561b4a8f-c1a7-4576-b42d-30e9f6abe689
	I1207 20:27:48.037286   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:48.037294   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:48.037305   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:48.037314   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:48.037321   30218 round_trippers.go:580]     Content-Length: 291
	I1207 20:27:48.037333   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:48 GMT
	I1207 20:27:48.037372   30218 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d249b622-1ef8-42db-b860-e5219d7241f8","resourceVersion":"305","creationTimestamp":"2023-12-07T20:27:35Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1207 20:27:48.037984   30218 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d249b622-1ef8-42db-b860-e5219d7241f8","resourceVersion":"305","creationTimestamp":"2023-12-07T20:27:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1207 20:27:48.038058   30218 round_trippers.go:463] PUT https://192.168.39.19:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1207 20:27:48.038076   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:48.038087   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:48.038099   30218 round_trippers.go:473]     Content-Type: application/json
	I1207 20:27:48.038110   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:48.043418   30218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I1207 20:27:48.043421   30218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40005
	I1207 20:27:48.043860   30218 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:27:48.043895   30218 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:27:48.044407   30218 main.go:141] libmachine: Using API Version  1
	I1207 20:27:48.044427   30218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:27:48.044441   30218 main.go:141] libmachine: Using API Version  1
	I1207 20:27:48.044463   30218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:27:48.044812   30218 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:27:48.044893   30218 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:27:48.045017   30218 main.go:141] libmachine: (multinode-660958) Calling .GetState
	I1207 20:27:48.045503   30218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:27:48.045577   30218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:27:48.047242   30218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:27:48.047572   30218 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:27:48.047946   30218 addons.go:231] Setting addon default-storageclass=true in "multinode-660958"
	I1207 20:27:48.047988   30218 host.go:66] Checking if "multinode-660958" exists ...
	I1207 20:27:48.048416   30218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:27:48.048462   30218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:27:48.057031   30218 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1207 20:27:48.057056   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:48.057067   30218 round_trippers.go:580]     Audit-Id: 833e650c-d70b-4848-b2ed-d94e257fe447
	I1207 20:27:48.057075   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:48.057084   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:48.057092   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:48.057104   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:48.057112   30218 round_trippers.go:580]     Content-Length: 291
	I1207 20:27:48.057119   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:48 GMT
	I1207 20:27:48.057164   30218 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d249b622-1ef8-42db-b860-e5219d7241f8","resourceVersion":"328","creationTimestamp":"2023-12-07T20:27:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1207 20:27:48.057328   30218 round_trippers.go:463] GET https://192.168.39.19:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1207 20:27:48.057349   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:48.057362   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:48.057371   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:48.059775   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:48.059797   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:48.059807   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:48.059816   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:48.059830   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:48.059843   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:48.059852   30218 round_trippers.go:580]     Content-Length: 291
	I1207 20:27:48.059862   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:48 GMT
	I1207 20:27:48.059897   30218 round_trippers.go:580]     Audit-Id: 6527b73e-64db-400c-bfd0-f190abb230e1
	I1207 20:27:48.059924   30218 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d249b622-1ef8-42db-b860-e5219d7241f8","resourceVersion":"328","creationTimestamp":"2023-12-07T20:27:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1207 20:27:48.060030   30218 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-660958" context rescaled to 1 replicas
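
The GET/PUT pair logged above rewrites the scale subresource of the kube-system/coredns Deployment, lowering spec.replicas from 2 to 1 so the single-node profile runs one CoreDNS pod. A rough client-go equivalent is sketched below, under the assumption of a kubeconfig at a placeholder path; it is not minikube's own implementation, which issues the requests directly as shown.

    // Rough sketch: scale kube-system/coredns to one replica through the
    // Deployment scale subresource, as the GET/PUT requests above do.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deployments := client.AppsV1().Deployments("kube-system")
        scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1
        if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
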
	I1207 20:27:48.060070   30218 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 20:27:48.061910   30218 out.go:177] * Verifying Kubernetes components...
	I1207 20:27:48.063264   30218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:27:48.063295   30218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I1207 20:27:48.061310   30218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40957
	I1207 20:27:48.063710   30218 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:27:48.063749   30218 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:27:48.064226   30218 main.go:141] libmachine: Using API Version  1
	I1207 20:27:48.064254   30218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:27:48.064353   30218 main.go:141] libmachine: Using API Version  1
	I1207 20:27:48.064375   30218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:27:48.064565   30218 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:27:48.064702   30218 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:27:48.064752   30218 main.go:141] libmachine: (multinode-660958) Calling .GetState
	I1207 20:27:48.065250   30218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:27:48.065282   30218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:27:48.066400   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:27:48.068262   30218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:27:48.069950   30218 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:27:48.069988   30218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 20:27:48.070012   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:48.073385   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:48.073860   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:48.073889   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:48.074153   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:48.074337   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:48.074499   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:48.074656   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:27:48.082347   30218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
	I1207 20:27:48.082824   30218 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:27:48.083284   30218 main.go:141] libmachine: Using API Version  1
	I1207 20:27:48.083312   30218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:27:48.083597   30218 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:27:48.083763   30218 main.go:141] libmachine: (multinode-660958) Calling .GetState
	I1207 20:27:48.085251   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:27:48.085474   30218 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 20:27:48.085489   30218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 20:27:48.085505   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:27:48.088156   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:48.088563   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:27:48.088597   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:27:48.088741   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:27:48.088910   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:27:48.089049   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:27:48.089205   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:27:48.210093   30218 command_runner.go:130] > apiVersion: v1
	I1207 20:27:48.210114   30218 command_runner.go:130] > data:
	I1207 20:27:48.210120   30218 command_runner.go:130] >   Corefile: |
	I1207 20:27:48.210127   30218 command_runner.go:130] >     .:53 {
	I1207 20:27:48.210132   30218 command_runner.go:130] >         errors
	I1207 20:27:48.210139   30218 command_runner.go:130] >         health {
	I1207 20:27:48.210145   30218 command_runner.go:130] >            lameduck 5s
	I1207 20:27:48.210151   30218 command_runner.go:130] >         }
	I1207 20:27:48.210156   30218 command_runner.go:130] >         ready
	I1207 20:27:48.210165   30218 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1207 20:27:48.210175   30218 command_runner.go:130] >            pods insecure
	I1207 20:27:48.210185   30218 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1207 20:27:48.210198   30218 command_runner.go:130] >            ttl 30
	I1207 20:27:48.210208   30218 command_runner.go:130] >         }
	I1207 20:27:48.210232   30218 command_runner.go:130] >         prometheus :9153
	I1207 20:27:48.210244   30218 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1207 20:27:48.210256   30218 command_runner.go:130] >            max_concurrent 1000
	I1207 20:27:48.210262   30218 command_runner.go:130] >         }
	I1207 20:27:48.210270   30218 command_runner.go:130] >         cache 30
	I1207 20:27:48.210280   30218 command_runner.go:130] >         loop
	I1207 20:27:48.210291   30218 command_runner.go:130] >         reload
	I1207 20:27:48.210301   30218 command_runner.go:130] >         loadbalance
	I1207 20:27:48.210310   30218 command_runner.go:130] >     }
	I1207 20:27:48.210318   30218 command_runner.go:130] > kind: ConfigMap
	I1207 20:27:48.210325   30218 command_runner.go:130] > metadata:
	I1207 20:27:48.210339   30218 command_runner.go:130] >   creationTimestamp: "2023-12-07T20:27:35Z"
	I1207 20:27:48.210348   30218 command_runner.go:130] >   name: coredns
	I1207 20:27:48.210356   30218 command_runner.go:130] >   namespace: kube-system
	I1207 20:27:48.210367   30218 command_runner.go:130] >   resourceVersion: "228"
	I1207 20:27:48.210376   30218 command_runner.go:130] >   uid: e7783337-00bf-41eb-a7bf-df63fd11f78e
	I1207 20:27:48.212025   30218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 20:27:48.212396   30218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:27:48.212649   30218 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:27:48.212933   30218 node_ready.go:35] waiting up to 6m0s for node "multinode-660958" to be "Ready" ...
	I1207 20:27:48.213024   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:48.213034   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:48.213041   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:48.213050   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:48.231787   30218 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1207 20:27:48.231806   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:48.231813   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:48.231818   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:48.231823   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:48.231828   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:48.231834   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:48 GMT
	I1207 20:27:48.231838   30218 round_trippers.go:580]     Audit-Id: a95cde64-4ead-4ff3-9d38-4eff44bff374
	I1207 20:27:48.231941   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:48.232488   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:48.232502   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:48.232509   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:48.232515   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:48.250307   30218 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1207 20:27:48.250328   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:48.250335   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:48.250340   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:48.250345   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:48 GMT
	I1207 20:27:48.250350   30218 round_trippers.go:580]     Audit-Id: 5a54c37c-b80d-41c0-b40d-0332c9e3784f
	I1207 20:27:48.250356   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:48.250360   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:48.251784   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:48.259626   30218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:27:48.265641   30218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 20:27:48.753222   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:48.753243   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:48.753251   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:48.753256   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:48.757277   30218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:27:48.757296   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:48.757305   30218 round_trippers.go:580]     Audit-Id: d7f5a2b2-01ea-4758-918b-2049198b0cc7
	I1207 20:27:48.757310   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:48.757316   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:48.757321   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:48.757328   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:48.757336   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:48 GMT
	I1207 20:27:48.757764   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:48.899175   30218 command_runner.go:130] > configmap/coredns replaced
	I1207 20:27:48.902284   30218 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
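
The sed pipeline run a few lines earlier inserts a hosts{} block resolving host.minikube.internal to 192.168.39.1 ahead of the forward stanza in the CoreDNS Corefile, which is what the "host record injected" line above reports. A hedged client-go sketch of the same edit follows; it assumes the four-space Corefile layout dumped earlier and a placeholder kubeconfig path, and is not the kubectl/sed approach minikube actually uses.

    // Minimal sketch: add a hosts{} entry for host.minikube.internal to the
    // coredns ConfigMap before its forward stanza, then update the ConfigMap.
    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cms := client.CoreV1().ConfigMaps("kube-system")
        cm, err := cms.Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Indentation assumes the four-space Corefile layout shown in the dump above.
        hosts := "    hosts {\n       192.168.39.1 host.minikube.internal\n       fallthrough\n    }\n"
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
            "    forward . /etc/resolv.conf", hosts+"    forward . /etc/resolv.conf", 1)
        if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
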
	I1207 20:27:49.199904   30218 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1207 20:27:49.199931   30218 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1207 20:27:49.199939   30218 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1207 20:27:49.199946   30218 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1207 20:27:49.199951   30218 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1207 20:27:49.199955   30218 command_runner.go:130] > pod/storage-provisioner created
	I1207 20:27:49.199975   30218 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1207 20:27:49.200000   30218 main.go:141] libmachine: Making call to close driver server
	I1207 20:27:49.200018   30218 main.go:141] libmachine: (multinode-660958) Calling .Close
	I1207 20:27:49.200055   30218 main.go:141] libmachine: Making call to close driver server
	I1207 20:27:49.200070   30218 main.go:141] libmachine: (multinode-660958) Calling .Close
	I1207 20:27:49.200295   30218 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:27:49.200315   30218 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:27:49.200332   30218 main.go:141] libmachine: Making call to close driver server
	I1207 20:27:49.200341   30218 main.go:141] libmachine: (multinode-660958) Calling .Close
	I1207 20:27:49.200394   30218 main.go:141] libmachine: (multinode-660958) DBG | Closing plugin on server side
	I1207 20:27:49.200416   30218 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:27:49.200446   30218 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:27:49.200467   30218 main.go:141] libmachine: Making call to close driver server
	I1207 20:27:49.200486   30218 main.go:141] libmachine: (multinode-660958) Calling .Close
	I1207 20:27:49.200567   30218 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:27:49.200582   30218 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:27:49.200746   30218 round_trippers.go:463] GET https://192.168.39.19:8443/apis/storage.k8s.io/v1/storageclasses
	I1207 20:27:49.200760   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:49.200771   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:49.200780   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:49.200901   30218 main.go:141] libmachine: (multinode-660958) DBG | Closing plugin on server side
	I1207 20:27:49.200919   30218 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:27:49.200938   30218 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:27:49.203898   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:27:49.203916   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:49.203925   30218 round_trippers.go:580]     Content-Length: 1273
	I1207 20:27:49.203939   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:49 GMT
	I1207 20:27:49.203947   30218 round_trippers.go:580]     Audit-Id: 560f7672-75ae-4c21-a211-9d5af23fb12f
	I1207 20:27:49.203958   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:49.203979   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:49.203989   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:49.204002   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:49.204065   30218 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"366"},"items":[{"metadata":{"name":"standard","uid":"fee4b114-9fcb-4ff4-9bad-c3d2be4b0f4c","resourceVersion":"360","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1207 20:27:49.204444   30218 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fee4b114-9fcb-4ff4-9bad-c3d2be4b0f4c","resourceVersion":"360","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1207 20:27:49.204509   30218 round_trippers.go:463] PUT https://192.168.39.19:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1207 20:27:49.204522   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:49.204532   30218 round_trippers.go:473]     Content-Type: application/json
	I1207 20:27:49.204543   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:49.204555   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:49.207933   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:27:49.207951   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:49.207960   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:49.207968   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:49.207977   30218 round_trippers.go:580]     Content-Length: 1220
	I1207 20:27:49.207989   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:49 GMT
	I1207 20:27:49.208000   30218 round_trippers.go:580]     Audit-Id: fe9140ac-dd29-49b2-ab8f-4b059db79c82
	I1207 20:27:49.208012   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:49.208024   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:49.208070   30218 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"fee4b114-9fcb-4ff4-9bad-c3d2be4b0f4c","resourceVersion":"360","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1207 20:27:49.208199   30218 main.go:141] libmachine: Making call to close driver server
	I1207 20:27:49.208215   30218 main.go:141] libmachine: (multinode-660958) Calling .Close
	I1207 20:27:49.208502   30218 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:27:49.208551   30218 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:27:49.208562   30218 main.go:141] libmachine: (multinode-660958) DBG | Closing plugin on server side
	I1207 20:27:49.210649   30218 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1207 20:27:49.212210   30218 addons.go:502] enable addons completed in 1.190174437s: enabled=[storage-provisioner default-storageclass]
	I1207 20:27:49.253098   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:49.253117   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:49.253128   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:49.253137   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:49.259537   30218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1207 20:27:49.259560   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:49.259570   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:49.259579   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:49 GMT
	I1207 20:27:49.259586   30218 round_trippers.go:580]     Audit-Id: 8d6904f9-84be-432b-ad6e-06bed663120f
	I1207 20:27:49.259592   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:49.259597   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:49.259602   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:49.259722   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:49.752229   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:49.752256   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:49.752264   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:49.752271   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:49.754925   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:49.754941   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:49.754947   30218 round_trippers.go:580]     Audit-Id: 9bd99d2d-f002-4900-b9b2-2f32c2592246
	I1207 20:27:49.754952   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:49.754959   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:49.754965   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:49.754972   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:49.754977   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:49 GMT
	I1207 20:27:49.755133   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:50.252813   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:50.252836   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:50.252844   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:50.252851   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:50.255389   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:50.255406   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:50.255413   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:50.255419   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:50.255447   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:50.255458   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:50.255465   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:50 GMT
	I1207 20:27:50.255475   30218 round_trippers.go:580]     Audit-Id: 7ed49b36-4a81-4b3c-8d3b-e06f674a8e24
	I1207 20:27:50.255744   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:50.256065   30218 node_ready.go:58] node "multinode-660958" has status "Ready":"False"
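
The repeated GETs of /api/v1/nodes/multinode-660958 above and below are the node_ready wait: the node object is polled until its NodeReady condition reports True, within the 6m0s budget noted earlier, and each cycle so far logs "Ready":"False". A minimal sketch of that poll is shown below; the kubeconfig path and poll interval are illustrative assumptions.

    // Hedged sketch of the readiness poll recorded above: fetch the node and
    // check its NodeReady condition until it turns True or the wait times out.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-660958", metav1.GetOptions{})
            if err == nil && nodeReady(node) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("timed out waiting for the node to become Ready")
    }
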
	I1207 20:27:50.752327   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:50.752357   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:50.752367   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:50.752391   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:50.755323   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:50.755340   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:50.755347   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:50.755353   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:50 GMT
	I1207 20:27:50.755358   30218 round_trippers.go:580]     Audit-Id: d0b24c2e-16ec-4f91-896d-85909aa801e0
	I1207 20:27:50.755369   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:50.755376   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:50.755391   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:50.755528   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:51.253151   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:51.253175   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:51.253186   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:51.253195   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:51.255869   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:51.255888   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:51.255897   30218 round_trippers.go:580]     Audit-Id: 1e4339f8-db61-4a68-8022-27a8becba918
	I1207 20:27:51.255905   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:51.255912   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:51.255920   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:51.255932   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:51.255942   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:51 GMT
	I1207 20:27:51.256191   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:51.752889   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:51.752937   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:51.752949   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:51.752957   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:51.755833   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:51.755852   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:51.755858   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:51.755863   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:51.755868   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:51.755879   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:51.755887   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:51 GMT
	I1207 20:27:51.755899   30218 round_trippers.go:580]     Audit-Id: 560257b7-5ac9-4d83-b346-48261adc3b1d
	I1207 20:27:51.756100   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:52.252633   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:52.252663   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:52.252677   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:52.252685   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:52.255405   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:52.255430   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:52.255438   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:52 GMT
	I1207 20:27:52.255443   30218 round_trippers.go:580]     Audit-Id: 3f5261ac-2813-4b4d-92c7-d2980e3832c3
	I1207 20:27:52.255448   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:52.255459   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:52.255469   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:52.255478   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:52.255672   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:52.752324   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:52.752349   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:52.752357   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:52.752363   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:52.755173   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:52.755196   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:52.755208   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:52.755216   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:52.755223   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:52.755237   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:52.755250   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:52 GMT
	I1207 20:27:52.755259   30218 round_trippers.go:580]     Audit-Id: ebd3f17e-1da5-4659-b4e5-4412eb16e6d0
	I1207 20:27:52.755400   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:52.755714   30218 node_ready.go:58] node "multinode-660958" has status "Ready":"False"
	I1207 20:27:53.252274   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:53.252315   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:53.252326   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:53.252334   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:53.254894   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:53.254917   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:53.254928   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:53.254937   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:53.254946   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:53 GMT
	I1207 20:27:53.254956   30218 round_trippers.go:580]     Audit-Id: 5921b8c9-8262-48f7-b5d1-7b887e272fea
	I1207 20:27:53.254963   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:53.254969   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:53.255295   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"322","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1207 20:27:53.752984   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:53.753005   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:53.753019   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:53.753026   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:53.756843   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:27:53.756867   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:53.756876   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:53.756885   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:53.756890   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:53.756898   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:53 GMT
	I1207 20:27:53.756904   30218 round_trippers.go:580]     Audit-Id: 350fadd6-e519-4099-8b0c-27d31005a51d
	I1207 20:27:53.756915   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:53.757396   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:53.757698   30218 node_ready.go:49] node "multinode-660958" has status "Ready":"True"
	I1207 20:27:53.757713   30218 node_ready.go:38] duration metric: took 5.544757663s waiting for node "multinode-660958" to be "Ready" ...
	I1207 20:27:53.757721   30218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:27:53.757789   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:27:53.757796   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:53.757803   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:53.757809   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:53.762241   30218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:27:53.762256   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:53.762262   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:53.762270   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:53.762278   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:53 GMT
	I1207 20:27:53.762286   30218 round_trippers.go:580]     Audit-Id: a4c28647-6426-4bbf-8686-7b280dac7cea
	I1207 20:27:53.762301   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:53.762310   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:53.762982   30218 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"389"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"388","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53878 chars]
	I1207 20:27:53.765992   30218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:53.766056   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:27:53.766065   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:53.766072   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:53.766078   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:53.768804   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:53.768820   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:53.768828   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:53.768834   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:53.768841   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:53 GMT
	I1207 20:27:53.768846   30218 round_trippers.go:580]     Audit-Id: 64b4c438-f338-4d22-b7f8-c013f68c6c44
	I1207 20:27:53.768853   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:53.768858   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:53.770994   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"388","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1207 20:27:53.771354   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:53.771365   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:53.771372   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:53.771377   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:53.773438   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:53.773457   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:53.773466   30218 round_trippers.go:580]     Audit-Id: 6d8baea4-8532-41a6-b034-111417963b4e
	I1207 20:27:53.773474   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:53.773482   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:53.773491   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:53.773499   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:53.773506   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:53 GMT
	I1207 20:27:53.773693   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:53.774125   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:27:53.774141   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:53.774151   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:53.774157   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:53.776285   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:53.776302   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:53.776313   30218 round_trippers.go:580]     Audit-Id: 41f4342e-48ef-4c21-9de4-9b23177db9c7
	I1207 20:27:53.776321   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:53.776329   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:53.776337   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:53.776347   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:53.776359   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:53 GMT
	I1207 20:27:53.776682   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"388","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1207 20:27:53.777039   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:53.777052   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:53.777062   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:53.777071   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:53.778954   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:27:53.778973   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:53.778982   30218 round_trippers.go:580]     Audit-Id: 5f606c98-4fd2-4a52-8b37-860c46e90758
	I1207 20:27:53.778991   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:53.779000   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:53.779009   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:53.779017   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:53.779025   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:53 GMT
	I1207 20:27:53.779288   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:54.280548   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:27:54.280574   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:54.280592   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:54.280602   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:54.283733   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:27:54.283754   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:54.283764   30218 round_trippers.go:580]     Audit-Id: ba00cd91-d674-44f5-9efe-5acfbc0ab5f7
	I1207 20:27:54.283772   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:54.283780   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:54.283789   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:54.283798   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:54.283807   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:54 GMT
	I1207 20:27:54.283984   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"388","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1207 20:27:54.284601   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:54.284620   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:54.284631   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:54.284640   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:54.288693   30218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:27:54.288707   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:54.288716   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:54.288724   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:54.288732   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:54.288743   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:54.288753   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:54 GMT
	I1207 20:27:54.288762   30218 round_trippers.go:580]     Audit-Id: 8ca97102-93d2-4b97-8f50-356692c45449
	I1207 20:27:54.289633   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:54.780369   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:27:54.780394   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:54.780402   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:54.780411   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:54.783141   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:54.783159   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:54.783173   30218 round_trippers.go:580]     Audit-Id: 420c9080-4747-4304-98d6-13cf6e5e1b01
	I1207 20:27:54.783184   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:54.783201   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:54.783214   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:54.783225   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:54.783235   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:54 GMT
	I1207 20:27:54.783378   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"388","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1207 20:27:54.783789   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:54.783808   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:54.783815   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:54.783821   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:54.786629   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:54.786642   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:54.786648   30218 round_trippers.go:580]     Audit-Id: 7435833f-7c01-42cc-945b-d1b9b420e031
	I1207 20:27:54.786653   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:54.786658   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:54.786666   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:54.786674   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:54.786682   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:54 GMT
	I1207 20:27:54.786852   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:55.280515   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:27:55.280536   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:55.280544   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:55.280557   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:55.282975   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:55.282999   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:55.283008   30218 round_trippers.go:580]     Audit-Id: 0d01c50e-e00c-42dd-af95-5106c6f9bc84
	I1207 20:27:55.283016   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:55.283028   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:55.283043   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:55.283051   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:55.283059   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:55 GMT
	I1207 20:27:55.283463   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"388","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1207 20:27:55.283999   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:55.284017   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:55.284024   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:55.284030   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:55.286398   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:55.286411   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:55.286417   30218 round_trippers.go:580]     Audit-Id: b99d3068-4d4a-42e0-b266-c9d0e8be4e82
	I1207 20:27:55.286422   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:55.286427   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:55.286432   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:55.286440   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:55.286457   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:55 GMT
	I1207 20:27:55.286632   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:55.780336   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:27:55.780361   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:55.780369   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:55.780375   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:55.784226   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:27:55.784249   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:55.784256   30218 round_trippers.go:580]     Audit-Id: 50aca6c4-cde3-42d7-a8d3-63690d89dee0
	I1207 20:27:55.784262   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:55.784267   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:55.784272   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:55.784277   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:55.784282   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:55 GMT
	I1207 20:27:55.785188   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"388","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1207 20:27:55.785593   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:55.785607   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:55.785614   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:55.785620   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:55.787596   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:27:55.787610   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:55.787616   30218 round_trippers.go:580]     Audit-Id: 35a5d286-72f6-4f2a-a5ca-77e1a594e692
	I1207 20:27:55.787622   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:55.787627   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:55.787632   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:55.787638   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:55.787646   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:55 GMT
	I1207 20:27:55.787758   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:55.788017   30218 pod_ready.go:102] pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:27:56.280451   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:27:56.280473   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.280481   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.280487   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.283219   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:56.283237   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.283246   30218 round_trippers.go:580]     Audit-Id: d656e926-1838-45c6-9b91-f57b37907896
	I1207 20:27:56.283253   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.283260   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.283268   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.283277   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.283283   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.283520   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"403","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1207 20:27:56.283930   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:56.283942   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.283949   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.283955   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.286557   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:56.286571   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.286577   30218 round_trippers.go:580]     Audit-Id: c575f1a2-4ef7-4ed7-b8ae-7bac8399efe0
	I1207 20:27:56.286582   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.286596   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.286602   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.286611   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.286619   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.286736   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:56.286995   30218 pod_ready.go:92] pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace has status "Ready":"True"
	I1207 20:27:56.287009   30218 pod_ready.go:81] duration metric: took 2.520998826s waiting for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.287017   30218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.287064   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-660958
	I1207 20:27:56.287072   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.287078   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.287084   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.288808   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:27:56.288824   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.288833   30218 round_trippers.go:580]     Audit-Id: af0c0d3a-e966-4b7f-8d19-b0910d232168
	I1207 20:27:56.288839   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.288847   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.288852   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.288860   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.288865   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.289159   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-660958","namespace":"kube-system","uid":"997363d1-ef51-46b9-98ad-276aa803f3a8","resourceVersion":"356","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.19:2379","kubernetes.io/config.hash":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.mirror":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.seen":"2023-12-07T20:27:35.772724909Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1207 20:27:56.289458   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:56.289469   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.289476   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.289486   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.291352   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:27:56.291366   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.291371   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.291376   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.291381   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.291390   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.291398   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.291413   30218 round_trippers.go:580]     Audit-Id: 2affeb1d-e29f-472b-ab8b-09c2636fee49
	I1207 20:27:56.291554   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:56.291793   30218 pod_ready.go:92] pod "etcd-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:27:56.291805   30218 pod_ready.go:81] duration metric: took 4.783787ms waiting for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.291817   30218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.291859   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-660958
	I1207 20:27:56.291865   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.291872   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.291878   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.293789   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:27:56.293813   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.293823   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.293836   30218 round_trippers.go:580]     Audit-Id: 87115ea4-908b-4ba4-bdc6-0c76366c1f68
	I1207 20:27:56.293844   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.293855   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.293869   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.293876   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.294113   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-660958","namespace":"kube-system","uid":"ab5b9260-db2a-4625-aff0-8b0fcf6a74a8","resourceVersion":"280","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.19:8443","kubernetes.io/config.hash":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.mirror":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.seen":"2023-12-07T20:27:35.772728261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1207 20:27:56.294563   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:56.294589   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.294600   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.294613   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.296441   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:27:56.296462   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.296471   30218 round_trippers.go:580]     Audit-Id: 95a46af5-3edf-42cf-8fbe-0ed0f9d250b5
	I1207 20:27:56.296482   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.296493   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.296500   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.296517   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.296527   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.296757   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:56.297056   30218 pod_ready.go:92] pod "kube-apiserver-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:27:56.297072   30218 pod_ready.go:81] duration metric: took 5.242459ms waiting for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.297083   30218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.297140   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-660958
	I1207 20:27:56.297151   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.297161   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.297173   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.299005   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:27:56.299021   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.299029   30218 round_trippers.go:580]     Audit-Id: da411719-509c-4e34-8605-116d7454f028
	I1207 20:27:56.299037   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.299046   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.299061   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.299070   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.299082   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.299321   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-660958","namespace":"kube-system","uid":"fb58a1b4-61c1-41c6-b3af-824cc7a08c14","resourceVersion":"359","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.mirror":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.seen":"2023-12-07T20:27:35.772729377Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1207 20:27:56.299762   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:56.299779   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.299790   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.299799   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.301398   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:27:56.301412   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.301420   30218 round_trippers.go:580]     Audit-Id: 9eb382c4-8337-41ad-8dd8-38a48524bbe3
	I1207 20:27:56.301428   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.301437   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.301452   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.301461   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.301482   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.301615   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:56.301871   30218 pod_ready.go:92] pod "kube-controller-manager-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:27:56.301885   30218 pod_ready.go:81] duration metric: took 4.78791ms waiting for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.301896   30218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.353167   30218 request.go:629] Waited for 51.21814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:27:56.353245   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:27:56.353254   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.353262   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.353269   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.355818   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:56.355838   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.355848   30218 round_trippers.go:580]     Audit-Id: 776dd834-d4f5-4585-8e70-58e61b16620d
	I1207 20:27:56.355856   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.355864   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.355876   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.355886   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.355902   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.356298   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pfc45","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e39fc15-3b2e-418c-92f1-32570e3bd853","resourceVersion":"373","creationTimestamp":"2023-12-07T20:27:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1207 20:27:56.552992   30218 request.go:629] Waited for 196.299493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:56.553077   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:56.553084   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.553095   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.553120   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.555579   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:56.555610   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.555621   30218 round_trippers.go:580]     Audit-Id: 38a3c191-345f-4784-b284-5e20b069375b
	I1207 20:27:56.555630   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.555639   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.555650   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.555661   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.555674   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.555833   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:56.556152   30218 pod_ready.go:92] pod "kube-proxy-pfc45" in "kube-system" namespace has status "Ready":"True"
	I1207 20:27:56.556167   30218 pod_ready.go:81] duration metric: took 254.263369ms waiting for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.556179   30218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.753590   30218 request.go:629] Waited for 197.352975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:27:56.753656   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:27:56.753661   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.753669   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.753678   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.756550   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:56.756572   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.756585   30218 round_trippers.go:580]     Audit-Id: c455d18e-77fc-41ca-9875-f685c8afd146
	I1207 20:27:56.756593   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.756600   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.756607   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.756616   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.756626   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.756755   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-660958","namespace":"kube-system","uid":"ff5eb685-6086-4a98-b3b9-a485746dcbd4","resourceVersion":"279","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.mirror":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.seen":"2023-12-07T20:27:35.772730586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1207 20:27:56.953207   30218 request.go:629] Waited for 195.996768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:56.953270   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:27:56.953277   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.953287   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.953296   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.956110   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:56.956129   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.956136   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.956141   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.956146   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.956151   30218 round_trippers.go:580]     Audit-Id: 993f062c-56a4-490c-9295-2dec61353c25
	I1207 20:27:56.956156   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.956162   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.956310   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:27:56.956624   30218 pod_ready.go:92] pod "kube-scheduler-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:27:56.956639   30218 pod_ready.go:81] duration metric: took 400.451401ms waiting for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:27:56.956653   30218 pod_ready.go:38] duration metric: took 3.198909807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
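
The lines above show minikube's pod_ready helper polling each control-plane pod through the API server and logging when its Ready condition reports True. The following is only an illustrative sketch of the same kind of check written against client-go; the kubeconfig path and the surrounding program structure are assumptions and are not taken from this log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // the same status the log prints as has status "Ready":"True".
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig location; minikube maintains its own kubeconfig entry.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll one control-plane pod until Ready or until a timeout, roughly as the
        // log does for etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler.
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-multinode-660958", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
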
	I1207 20:27:56.956695   30218 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:27:56.956749   30218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:27:56.969625   30218 command_runner.go:130] > 1064
	I1207 20:27:56.969660   30218 api_server.go:72] duration metric: took 8.909550242s to wait for apiserver process to appear ...
	I1207 20:27:56.969671   30218 api_server.go:88] waiting for apiserver healthz status ...
	I1207 20:27:56.969688   30218 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1207 20:27:56.974526   30218 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1207 20:27:56.974578   30218 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1207 20:27:56.974585   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:56.974593   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:56.974601   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:56.975703   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:27:56.975717   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:56.975725   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:56 GMT
	I1207 20:27:56.975731   30218 round_trippers.go:580]     Audit-Id: e371bf49-decd-462e-b3b3-3bd2bbcaadcc
	I1207 20:27:56.975736   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:56.975740   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:56.975745   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:56.975750   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:56.975755   30218 round_trippers.go:580]     Content-Length: 264
	I1207 20:27:56.975867   30218 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1207 20:27:56.975961   30218 api_server.go:141] control plane version: v1.28.4
	I1207 20:27:56.975980   30218 api_server.go:131] duration metric: took 6.30225ms to wait for apiserver health ...
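
Above, the API server health is confirmed by hitting /healthz (which answers with the plain text "ok") and the control-plane version is read back from /version. A rough client-go equivalent, assuming a clientset built as in the previous sketch and the same standard-library imports:

    // checkAPIServer probes /healthz and reads /version, the two requests logged above.
    func checkAPIServer(ctx context.Context, client *kubernetes.Clientset) error {
        // GET /healthz returns the plain text "ok" when the apiserver is healthy.
        body, err := client.RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version returns the JSON document shown in the log (major, minor, gitVersion, ...).
        info, err := client.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Println("control plane version:", info.GitVersion) // e.g. v1.28.4
        return nil
    }
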
	I1207 20:27:56.975992   30218 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 20:27:57.153396   30218 request.go:629] Waited for 177.336303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:27:57.153463   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:27:57.153469   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:57.153477   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:57.153486   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:57.157467   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:27:57.157490   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:57.157501   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:57.157508   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:57.157517   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:57 GMT
	I1207 20:27:57.157528   30218 round_trippers.go:580]     Audit-Id: 676d29ed-9cea-41ba-be66-8f65a322d791
	I1207 20:27:57.157536   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:57.157544   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:57.158554   30218 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"403","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I1207 20:27:57.160283   30218 system_pods.go:59] 8 kube-system pods found
	I1207 20:27:57.160305   30218 system_pods.go:61] "coredns-5dd5756b68-7mss7" [6d6632ea-9aae-43e7-8b17-56399870082b] Running
	I1207 20:27:57.160310   30218 system_pods.go:61] "etcd-multinode-660958" [997363d1-ef51-46b9-98ad-276aa803f3a8] Running
	I1207 20:27:57.160314   30218 system_pods.go:61] "kindnet-jpfqs" [158552a2-294c-4d08-81de-05b1daf7dfe1] Running
	I1207 20:27:57.160318   30218 system_pods.go:61] "kube-apiserver-multinode-660958" [ab5b9260-db2a-4625-aff0-8b0fcf6a74a8] Running
	I1207 20:27:57.160324   30218 system_pods.go:61] "kube-controller-manager-multinode-660958" [fb58a1b4-61c1-41c6-b3af-824cc7a08c14] Running
	I1207 20:27:57.160327   30218 system_pods.go:61] "kube-proxy-pfc45" [1e39fc15-3b2e-418c-92f1-32570e3bd853] Running
	I1207 20:27:57.160332   30218 system_pods.go:61] "kube-scheduler-multinode-660958" [ff5eb685-6086-4a98-b3b9-a485746dcbd4] Running
	I1207 20:27:57.160336   30218 system_pods.go:61] "storage-provisioner" [48bcf9dc-632d-4f04-9f6a-04d31cef5d88] Running
	I1207 20:27:57.160343   30218 system_pods.go:74] duration metric: took 184.343412ms to wait for pod list to return data ...
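
Several requests in this stretch are delayed with "Waited for ... due to client-side throttling, not priority and fairness". That message comes from client-go's own rate limiter, which is controlled by the QPS and Burst fields on rest.Config, not by the server's API Priority and Fairness. A hedged sketch of raising those limits; the numeric values and the kubeconfigPath parameter are arbitrary examples, not settings used by this test:

    // newFasterClient builds a clientset with a higher client-side rate limit.
    func newFasterClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go's default is 5 requests per second
        cfg.Burst = 100 // client-go's default burst is 10
        return kubernetes.NewForConfig(cfg)
    }
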
	I1207 20:27:57.160351   30218 default_sa.go:34] waiting for default service account to be created ...
	I1207 20:27:57.353819   30218 request.go:629] Waited for 193.39506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1207 20:27:57.353893   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1207 20:27:57.353904   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:57.353915   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:57.353941   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:57.357063   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:27:57.357083   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:57.357089   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:57.357104   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:57.357110   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:57.357116   30218 round_trippers.go:580]     Content-Length: 261
	I1207 20:27:57.357121   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:57 GMT
	I1207 20:27:57.357128   30218 round_trippers.go:580]     Audit-Id: d7f4a4df-c628-4ea6-a9c0-25fb3bc944f2
	I1207 20:27:57.357136   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:57.357159   30218 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e1c756f1-a1dc-42bb-91cf-feb818e20257","resourceVersion":"306","creationTimestamp":"2023-12-07T20:27:47Z"}}]}
	I1207 20:27:57.357376   30218 default_sa.go:45] found service account: "default"
	I1207 20:27:57.357396   30218 default_sa.go:55] duration metric: took 197.038747ms for default service account to be created ...
	I1207 20:27:57.357406   30218 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 20:27:57.553841   30218 request.go:629] Waited for 196.375101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:27:57.553892   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:27:57.553906   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:57.553914   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:57.553927   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:57.557265   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:27:57.557288   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:57.557298   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:57 GMT
	I1207 20:27:57.557315   30218 round_trippers.go:580]     Audit-Id: c6306a2e-083d-4d8a-98a0-374e8896cc43
	I1207 20:27:57.557324   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:57.557330   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:57.557335   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:57.557340   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:57.558385   30218 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"403","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I1207 20:27:57.559984   30218 system_pods.go:86] 8 kube-system pods found
	I1207 20:27:57.560002   30218 system_pods.go:89] "coredns-5dd5756b68-7mss7" [6d6632ea-9aae-43e7-8b17-56399870082b] Running
	I1207 20:27:57.560007   30218 system_pods.go:89] "etcd-multinode-660958" [997363d1-ef51-46b9-98ad-276aa803f3a8] Running
	I1207 20:27:57.560011   30218 system_pods.go:89] "kindnet-jpfqs" [158552a2-294c-4d08-81de-05b1daf7dfe1] Running
	I1207 20:27:57.560015   30218 system_pods.go:89] "kube-apiserver-multinode-660958" [ab5b9260-db2a-4625-aff0-8b0fcf6a74a8] Running
	I1207 20:27:57.560021   30218 system_pods.go:89] "kube-controller-manager-multinode-660958" [fb58a1b4-61c1-41c6-b3af-824cc7a08c14] Running
	I1207 20:27:57.560024   30218 system_pods.go:89] "kube-proxy-pfc45" [1e39fc15-3b2e-418c-92f1-32570e3bd853] Running
	I1207 20:27:57.560028   30218 system_pods.go:89] "kube-scheduler-multinode-660958" [ff5eb685-6086-4a98-b3b9-a485746dcbd4] Running
	I1207 20:27:57.560032   30218 system_pods.go:89] "storage-provisioner" [48bcf9dc-632d-4f04-9f6a-04d31cef5d88] Running
	I1207 20:27:57.560039   30218 system_pods.go:126] duration metric: took 202.628ms to wait for k8s-apps to be running ...
	I1207 20:27:57.560048   30218 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 20:27:57.560086   30218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:27:57.572486   30218 system_svc.go:56] duration metric: took 12.430841ms WaitForService to wait for kubelet.
	I1207 20:27:57.572508   30218 kubeadm.go:581] duration metric: took 9.512398043s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 20:27:57.572527   30218 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:27:57.753966   30218 request.go:629] Waited for 181.356416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1207 20:27:57.754021   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1207 20:27:57.754026   30218 round_trippers.go:469] Request Headers:
	I1207 20:27:57.754034   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:27:57.754045   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:27:57.756651   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:27:57.756668   30218 round_trippers.go:577] Response Headers:
	I1207 20:27:57.756675   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:27:57.756681   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:27:57.756686   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:27:57 GMT
	I1207 20:27:57.756694   30218 round_trippers.go:580]     Audit-Id: 56443410-1ba2-4618-9633-6fedeb0c5dc5
	I1207 20:27:57.756708   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:27:57.756717   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:27:57.757310   30218 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I1207 20:27:57.757653   30218 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:27:57.757672   30218 node_conditions.go:123] node cpu capacity is 2
	I1207 20:27:57.757682   30218 node_conditions.go:105] duration metric: took 185.151185ms to run NodePressure ...
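
The NodePressure step above reads the node's ephemeral-storage and CPU capacity straight from the Node objects returned by GET /api/v1/nodes. A minimal sketch of pulling the same two fields with client-go (clientset construction and imports as in the first sketch):

    // printNodeCapacity lists each node's ephemeral-storage and CPU capacity,
    // the two values logged while verifying NodePressure.
    func printNodeCapacity(ctx context.Context, client *kubernetes.Clientset) error {
        nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
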
	I1207 20:27:57.757693   30218 start.go:228] waiting for startup goroutines ...
	I1207 20:27:57.757705   30218 start.go:233] waiting for cluster config update ...
	I1207 20:27:57.757713   30218 start.go:242] writing updated cluster config ...
	I1207 20:27:57.760140   30218 out.go:177] 
	I1207 20:27:57.762143   30218 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:27:57.762215   30218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
	I1207 20:27:57.763931   30218 out.go:177] * Starting worker node multinode-660958-m02 in cluster multinode-660958
	I1207 20:27:57.765338   30218 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:27:57.765355   30218 cache.go:56] Caching tarball of preloaded images
	I1207 20:27:57.765430   30218 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 20:27:57.765441   30218 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 20:27:57.765497   30218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
	I1207 20:27:57.765638   30218 start.go:365] acquiring machines lock for multinode-660958-m02: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 20:27:57.765673   30218 start.go:369] acquired machines lock for "multinode-660958-m02" in 19.08µs
	I1207 20:27:57.765688   30218 start.go:93] Provisioning new machine with config: &{Name:multinode-660958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:t
rue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1207 20:27:57.765747   30218 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1207 20:27:57.767678   30218 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 20:27:57.767768   30218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:27:57.767809   30218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:27:57.781499   30218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36015
	I1207 20:27:57.781883   30218 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:27:57.782346   30218 main.go:141] libmachine: Using API Version  1
	I1207 20:27:57.782363   30218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:27:57.782658   30218 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:27:57.782808   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetMachineName
	I1207 20:27:57.782916   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:27:57.783071   30218 start.go:159] libmachine.API.Create for "multinode-660958" (driver="kvm2")
	I1207 20:27:57.783095   30218 client.go:168] LocalClient.Create starting
	I1207 20:27:57.783124   30218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 20:27:57.783162   30218 main.go:141] libmachine: Decoding PEM data...
	I1207 20:27:57.783184   30218 main.go:141] libmachine: Parsing certificate...
	I1207 20:27:57.783246   30218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 20:27:57.783271   30218 main.go:141] libmachine: Decoding PEM data...
	I1207 20:27:57.783283   30218 main.go:141] libmachine: Parsing certificate...
	I1207 20:27:57.783332   30218 main.go:141] libmachine: Running pre-create checks...
	I1207 20:27:57.783347   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .PreCreateCheck
	I1207 20:27:57.783504   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetConfigRaw
	I1207 20:27:57.783887   30218 main.go:141] libmachine: Creating machine...
	I1207 20:27:57.783903   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .Create
	I1207 20:27:57.784048   30218 main.go:141] libmachine: (multinode-660958-m02) Creating KVM machine...
	I1207 20:27:57.785122   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found existing default KVM network
	I1207 20:27:57.785214   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found existing private KVM network mk-multinode-660958
	I1207 20:27:57.785311   30218 main.go:141] libmachine: (multinode-660958-m02) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02 ...
	I1207 20:27:57.785351   30218 main.go:141] libmachine: (multinode-660958-m02) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 20:27:57.785396   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:27:57.785308   30603 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:27:57.785463   30218 main.go:141] libmachine: (multinode-660958-m02) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 20:27:57.996230   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:27:57.996133   30603 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa...
	I1207 20:27:58.451209   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:27:58.451069   30603 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/multinode-660958-m02.rawdisk...
	I1207 20:27:58.451237   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Writing magic tar header
	I1207 20:27:58.451254   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Writing SSH key tar header
	I1207 20:27:58.451270   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:27:58.451185   30603 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02 ...
	I1207 20:27:58.451294   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02
	I1207 20:27:58.451360   30218 main.go:141] libmachine: (multinode-660958-m02) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02 (perms=drwx------)
	I1207 20:27:58.451393   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 20:27:58.451409   30218 main.go:141] libmachine: (multinode-660958-m02) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 20:27:58.451427   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:27:58.451459   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 20:27:58.451474   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 20:27:58.451487   30218 main.go:141] libmachine: (multinode-660958-m02) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 20:27:58.451504   30218 main.go:141] libmachine: (multinode-660958-m02) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 20:27:58.451528   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Checking permissions on dir: /home/jenkins
	I1207 20:27:58.451570   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Checking permissions on dir: /home
	I1207 20:27:58.451594   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Skipping /home - not owner
	I1207 20:27:58.451617   30218 main.go:141] libmachine: (multinode-660958-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 20:27:58.451631   30218 main.go:141] libmachine: (multinode-660958-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 20:27:58.451644   30218 main.go:141] libmachine: (multinode-660958-m02) Creating domain...
	I1207 20:27:58.452415   30218 main.go:141] libmachine: (multinode-660958-m02) define libvirt domain using xml: 
	I1207 20:27:58.452436   30218 main.go:141] libmachine: (multinode-660958-m02) <domain type='kvm'>
	I1207 20:27:58.452444   30218 main.go:141] libmachine: (multinode-660958-m02)   <name>multinode-660958-m02</name>
	I1207 20:27:58.452450   30218 main.go:141] libmachine: (multinode-660958-m02)   <memory unit='MiB'>2200</memory>
	I1207 20:27:58.452457   30218 main.go:141] libmachine: (multinode-660958-m02)   <vcpu>2</vcpu>
	I1207 20:27:58.452462   30218 main.go:141] libmachine: (multinode-660958-m02)   <features>
	I1207 20:27:58.452476   30218 main.go:141] libmachine: (multinode-660958-m02)     <acpi/>
	I1207 20:27:58.452488   30218 main.go:141] libmachine: (multinode-660958-m02)     <apic/>
	I1207 20:27:58.452499   30218 main.go:141] libmachine: (multinode-660958-m02)     <pae/>
	I1207 20:27:58.452511   30218 main.go:141] libmachine: (multinode-660958-m02)     
	I1207 20:27:58.452541   30218 main.go:141] libmachine: (multinode-660958-m02)   </features>
	I1207 20:27:58.452569   30218 main.go:141] libmachine: (multinode-660958-m02)   <cpu mode='host-passthrough'>
	I1207 20:27:58.452584   30218 main.go:141] libmachine: (multinode-660958-m02)   
	I1207 20:27:58.452594   30218 main.go:141] libmachine: (multinode-660958-m02)   </cpu>
	I1207 20:27:58.452611   30218 main.go:141] libmachine: (multinode-660958-m02)   <os>
	I1207 20:27:58.452622   30218 main.go:141] libmachine: (multinode-660958-m02)     <type>hvm</type>
	I1207 20:27:58.452631   30218 main.go:141] libmachine: (multinode-660958-m02)     <boot dev='cdrom'/>
	I1207 20:27:58.452636   30218 main.go:141] libmachine: (multinode-660958-m02)     <boot dev='hd'/>
	I1207 20:27:58.452649   30218 main.go:141] libmachine: (multinode-660958-m02)     <bootmenu enable='no'/>
	I1207 20:27:58.452661   30218 main.go:141] libmachine: (multinode-660958-m02)   </os>
	I1207 20:27:58.452674   30218 main.go:141] libmachine: (multinode-660958-m02)   <devices>
	I1207 20:27:58.452688   30218 main.go:141] libmachine: (multinode-660958-m02)     <disk type='file' device='cdrom'>
	I1207 20:27:58.452725   30218 main.go:141] libmachine: (multinode-660958-m02)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/boot2docker.iso'/>
	I1207 20:27:58.452750   30218 main.go:141] libmachine: (multinode-660958-m02)       <target dev='hdc' bus='scsi'/>
	I1207 20:27:58.452765   30218 main.go:141] libmachine: (multinode-660958-m02)       <readonly/>
	I1207 20:27:58.452777   30218 main.go:141] libmachine: (multinode-660958-m02)     </disk>
	I1207 20:27:58.452794   30218 main.go:141] libmachine: (multinode-660958-m02)     <disk type='file' device='disk'>
	I1207 20:27:58.452809   30218 main.go:141] libmachine: (multinode-660958-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 20:27:58.452831   30218 main.go:141] libmachine: (multinode-660958-m02)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/multinode-660958-m02.rawdisk'/>
	I1207 20:27:58.452850   30218 main.go:141] libmachine: (multinode-660958-m02)       <target dev='hda' bus='virtio'/>
	I1207 20:27:58.452871   30218 main.go:141] libmachine: (multinode-660958-m02)     </disk>
	I1207 20:27:58.452893   30218 main.go:141] libmachine: (multinode-660958-m02)     <interface type='network'>
	I1207 20:27:58.452908   30218 main.go:141] libmachine: (multinode-660958-m02)       <source network='mk-multinode-660958'/>
	I1207 20:27:58.452916   30218 main.go:141] libmachine: (multinode-660958-m02)       <model type='virtio'/>
	I1207 20:27:58.452922   30218 main.go:141] libmachine: (multinode-660958-m02)     </interface>
	I1207 20:27:58.452931   30218 main.go:141] libmachine: (multinode-660958-m02)     <interface type='network'>
	I1207 20:27:58.452938   30218 main.go:141] libmachine: (multinode-660958-m02)       <source network='default'/>
	I1207 20:27:58.452945   30218 main.go:141] libmachine: (multinode-660958-m02)       <model type='virtio'/>
	I1207 20:27:58.452952   30218 main.go:141] libmachine: (multinode-660958-m02)     </interface>
	I1207 20:27:58.452959   30218 main.go:141] libmachine: (multinode-660958-m02)     <serial type='pty'>
	I1207 20:27:58.452973   30218 main.go:141] libmachine: (multinode-660958-m02)       <target port='0'/>
	I1207 20:27:58.452991   30218 main.go:141] libmachine: (multinode-660958-m02)     </serial>
	I1207 20:27:58.453005   30218 main.go:141] libmachine: (multinode-660958-m02)     <console type='pty'>
	I1207 20:27:58.453018   30218 main.go:141] libmachine: (multinode-660958-m02)       <target type='serial' port='0'/>
	I1207 20:27:58.453030   30218 main.go:141] libmachine: (multinode-660958-m02)     </console>
	I1207 20:27:58.453041   30218 main.go:141] libmachine: (multinode-660958-m02)     <rng model='virtio'>
	I1207 20:27:58.453055   30218 main.go:141] libmachine: (multinode-660958-m02)       <backend model='random'>/dev/random</backend>
	I1207 20:27:58.453067   30218 main.go:141] libmachine: (multinode-660958-m02)     </rng>
	I1207 20:27:58.453079   30218 main.go:141] libmachine: (multinode-660958-m02)     
	I1207 20:27:58.453091   30218 main.go:141] libmachine: (multinode-660958-m02)     
	I1207 20:27:58.453112   30218 main.go:141] libmachine: (multinode-660958-m02)   </devices>
	I1207 20:27:58.453123   30218 main.go:141] libmachine: (multinode-660958-m02) </domain>
	I1207 20:27:58.453136   30218 main.go:141] libmachine: (multinode-660958-m02) 
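
The block above is the libvirt domain XML the kvm2 driver generates before defining and starting the new VM. The sketch below shows the general shape of doing that directly with a Go libvirt binding (libvirt.org/go/libvirt); the library choice is an assumption for illustration, and domainXML is a stand-in for the full <domain> document printed above.

    package main

    import (
        "fmt"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Connect to the system libvirt daemon, the same URI seen in the machine config (qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Stand-in for the <domain>...</domain> XML printed in the log above.
        domainXML := "<domain type='kvm'>...</domain>"

        // Define the persistent domain, then start it; this corresponds to the
        // "define libvirt domain using xml" and "Creating domain..." log lines.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain defined and started")
    }
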
	I1207 20:27:58.459871   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:f2:95:db in network default
	I1207 20:27:58.460437   30218 main.go:141] libmachine: (multinode-660958-m02) Ensuring networks are active...
	I1207 20:27:58.460462   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:27:58.461129   30218 main.go:141] libmachine: (multinode-660958-m02) Ensuring network default is active
	I1207 20:27:58.461500   30218 main.go:141] libmachine: (multinode-660958-m02) Ensuring network mk-multinode-660958 is active
	I1207 20:27:58.461877   30218 main.go:141] libmachine: (multinode-660958-m02) Getting domain xml...
	I1207 20:27:58.462537   30218 main.go:141] libmachine: (multinode-660958-m02) Creating domain...
	I1207 20:27:59.666981   30218 main.go:141] libmachine: (multinode-660958-m02) Waiting to get IP...
	I1207 20:27:59.667773   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:27:59.668205   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:27:59.668233   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:27:59.668168   30603 retry.go:31] will retry after 296.594817ms: waiting for machine to come up
	I1207 20:27:59.966677   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:27:59.967161   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:27:59.967189   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:27:59.967117   30603 retry.go:31] will retry after 342.936756ms: waiting for machine to come up
	I1207 20:28:00.311734   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:00.312208   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:00.312228   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:00.312157   30603 retry.go:31] will retry after 459.333474ms: waiting for machine to come up
	I1207 20:28:00.773672   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:00.774173   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:00.774202   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:00.774128   30603 retry.go:31] will retry after 427.847906ms: waiting for machine to come up
	I1207 20:28:01.203682   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:01.204127   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:01.204156   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:01.204067   30603 retry.go:31] will retry after 589.380448ms: waiting for machine to come up
	I1207 20:28:01.794664   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:01.795123   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:01.795146   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:01.795062   30603 retry.go:31] will retry after 745.118382ms: waiting for machine to come up
	I1207 20:28:02.541309   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:02.541859   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:02.541889   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:02.541818   30603 retry.go:31] will retry after 923.611986ms: waiting for machine to come up
	I1207 20:28:03.467176   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:03.467619   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:03.467652   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:03.467591   30603 retry.go:31] will retry after 1.297057327s: waiting for machine to come up
	I1207 20:28:04.767117   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:04.767541   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:04.767570   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:04.767487   30603 retry.go:31] will retry after 1.577029715s: waiting for machine to come up
	I1207 20:28:06.346900   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:06.347294   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:06.347317   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:06.347252   30603 retry.go:31] will retry after 1.718697746s: waiting for machine to come up
	I1207 20:28:08.067048   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:08.067490   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:08.067517   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:08.067455   30603 retry.go:31] will retry after 1.847415599s: waiting for machine to come up
	I1207 20:28:09.917386   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:09.917880   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:09.917910   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:09.917821   30603 retry.go:31] will retry after 3.062072196s: waiting for machine to come up
	I1207 20:28:12.981266   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:12.981688   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:12.981718   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:12.981625   30603 retry.go:31] will retry after 3.355273752s: waiting for machine to come up
	I1207 20:28:16.339622   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:16.340161   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find current IP address of domain multinode-660958-m02 in network mk-multinode-660958
	I1207 20:28:16.340197   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | I1207 20:28:16.340085   30603 retry.go:31] will retry after 3.961127143s: waiting for machine to come up
	I1207 20:28:20.302308   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.302802   30218 main.go:141] libmachine: (multinode-660958-m02) Found IP for machine: 192.168.39.69
	I1207 20:28:20.302827   30218 main.go:141] libmachine: (multinode-660958-m02) Reserving static IP address...
	I1207 20:28:20.302843   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has current primary IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.303248   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | unable to find host DHCP lease matching {name: "multinode-660958-m02", mac: "52:54:00:ec:1e:84", ip: "192.168.39.69"} in network mk-multinode-660958
	I1207 20:28:20.374591   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Getting to WaitForSSH function...
	I1207 20:28:20.374632   30218 main.go:141] libmachine: (multinode-660958-m02) Reserved static IP address: 192.168.39.69
	I1207 20:28:20.374652   30218 main.go:141] libmachine: (multinode-660958-m02) Waiting for SSH to be available...
	I1207 20:28:20.377409   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.377936   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:20.377979   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.378154   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Using SSH client type: external
	I1207 20:28:20.378176   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa (-rw-------)
	I1207 20:28:20.378209   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 20:28:20.378224   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | About to run SSH command:
	I1207 20:28:20.378257   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | exit 0
	I1207 20:28:20.465460   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | SSH cmd err, output: <nil>: 
	I1207 20:28:20.465728   30218 main.go:141] libmachine: (multinode-660958-m02) KVM machine creation complete!
	I1207 20:28:20.466006   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetConfigRaw
	I1207 20:28:20.466579   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:28:20.466782   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:28:20.466954   30218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1207 20:28:20.466972   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetState
	I1207 20:28:20.468198   30218 main.go:141] libmachine: Detecting operating system of created instance...
	I1207 20:28:20.468212   30218 main.go:141] libmachine: Waiting for SSH to be available...
	I1207 20:28:20.468218   30218 main.go:141] libmachine: Getting to WaitForSSH function...
	I1207 20:28:20.468227   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:28:20.470860   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.471281   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:20.471317   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.471457   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:28:20.471659   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:20.471802   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:20.471953   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:28:20.472112   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:28:20.472557   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1207 20:28:20.472575   30218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1207 20:28:20.581065   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:28:20.581089   30218 main.go:141] libmachine: Detecting the provisioner...
	I1207 20:28:20.581097   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:28:20.583866   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.584200   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:20.584229   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.584387   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:28:20.584595   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:20.584766   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:20.584909   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:28:20.585055   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:28:20.585396   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1207 20:28:20.585411   30218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1207 20:28:20.698712   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2b7375-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1207 20:28:20.698801   30218 main.go:141] libmachine: found compatible host: buildroot
	I1207 20:28:20.698811   30218 main.go:141] libmachine: Provisioning with buildroot...
	I1207 20:28:20.698819   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetMachineName
	I1207 20:28:20.699062   30218 buildroot.go:166] provisioning hostname "multinode-660958-m02"
	I1207 20:28:20.699096   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetMachineName
	I1207 20:28:20.699280   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:28:20.702018   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.702335   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:20.702357   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.702500   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:28:20.702675   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:20.702805   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:20.702937   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:28:20.703072   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:28:20.703370   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1207 20:28:20.703388   30218 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-660958-m02 && echo "multinode-660958-m02" | sudo tee /etc/hostname
	I1207 20:28:20.826934   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-660958-m02
	
	I1207 20:28:20.826965   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:28:20.830054   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.830442   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:20.830470   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.830618   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:28:20.830815   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:20.831015   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:20.831176   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:28:20.831320   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:28:20.831651   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1207 20:28:20.831676   30218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-660958-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-660958-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-660958-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:28:20.955890   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:28:20.955924   30218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 20:28:20.955941   30218 buildroot.go:174] setting up certificates
	I1207 20:28:20.955954   30218 provision.go:83] configureAuth start
	I1207 20:28:20.955965   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetMachineName
	I1207 20:28:20.956256   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetIP
	I1207 20:28:20.958921   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.959255   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:20.959287   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.959454   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:28:20.961828   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.962232   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:20.962268   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:20.962352   30218 provision.go:138] copyHostCerts
	I1207 20:28:20.962388   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:28:20.962463   30218 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 20:28:20.962479   30218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:28:20.962574   30218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 20:28:20.962739   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:28:20.962771   30218 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 20:28:20.962780   30218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:28:20.962824   30218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 20:28:20.962915   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:28:20.962944   30218 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 20:28:20.962950   30218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:28:20.962994   30218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 20:28:20.963073   30218 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.multinode-660958-m02 san=[192.168.39.69 192.168.39.69 localhost 127.0.0.1 minikube multinode-660958-m02]
	I1207 20:28:21.131843   30218 provision.go:172] copyRemoteCerts
	I1207 20:28:21.131907   30218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:28:21.131936   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:28:21.134383   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.134673   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:21.134707   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.134892   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:28:21.135063   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:21.135196   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:28:21.135300   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa Username:docker}
	I1207 20:28:21.219323   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 20:28:21.219392   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1207 20:28:21.242422   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 20:28:21.242478   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 20:28:21.265755   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 20:28:21.265829   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 20:28:21.288931   30218 provision.go:86] duration metric: configureAuth took 332.964911ms
	I1207 20:28:21.288959   30218 buildroot.go:189] setting minikube options for container-runtime
	I1207 20:28:21.289171   30218 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:28:21.289255   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:28:21.291602   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.291915   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:21.291948   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.292131   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:28:21.292346   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:21.292498   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:21.292641   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:28:21.292783   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:28:21.293098   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1207 20:28:21.293119   30218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 20:28:21.596824   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 20:28:21.596850   30218 main.go:141] libmachine: Checking connection to Docker...
	I1207 20:28:21.596861   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetURL
	I1207 20:28:21.598155   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | Using libvirt version 6000000
	I1207 20:28:21.600166   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.600523   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:21.600567   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.600731   30218 main.go:141] libmachine: Docker is up and running!
	I1207 20:28:21.600751   30218 main.go:141] libmachine: Reticulating splines...
	I1207 20:28:21.600760   30218 client.go:171] LocalClient.Create took 23.817657891s
	I1207 20:28:21.600782   30218 start.go:167] duration metric: libmachine.API.Create for "multinode-660958" took 23.817712777s
	I1207 20:28:21.600790   30218 start.go:300] post-start starting for "multinode-660958-m02" (driver="kvm2")
	I1207 20:28:21.600798   30218 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:28:21.600814   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:28:21.601058   30218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:28:21.601085   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:28:21.603277   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.603589   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:21.603609   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.603762   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:28:21.603931   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:21.604082   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:28:21.604210   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa Username:docker}
	I1207 20:28:21.687953   30218 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:28:21.692252   30218 command_runner.go:130] > NAME=Buildroot
	I1207 20:28:21.692268   30218 command_runner.go:130] > VERSION=2021.02.12-1-ge2b7375-dirty
	I1207 20:28:21.692273   30218 command_runner.go:130] > ID=buildroot
	I1207 20:28:21.692279   30218 command_runner.go:130] > VERSION_ID=2021.02.12
	I1207 20:28:21.692283   30218 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1207 20:28:21.692593   30218 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 20:28:21.692612   30218 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 20:28:21.692693   30218 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 20:28:21.692758   30218 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 20:28:21.692767   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /etc/ssl/certs/168402.pem
	I1207 20:28:21.692846   30218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 20:28:21.701600   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:28:21.726449   30218 start.go:303] post-start completed in 125.648159ms
	I1207 20:28:21.726489   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetConfigRaw
	I1207 20:28:21.727040   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetIP
	I1207 20:28:21.729525   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.729918   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:21.729970   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.730154   30218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
	I1207 20:28:21.730389   30218 start.go:128] duration metric: createHost completed in 23.964632723s
	I1207 20:28:21.730412   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:28:21.732525   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.732959   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:21.732987   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.733135   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:28:21.733344   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:21.733515   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:21.733663   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:28:21.733804   30218 main.go:141] libmachine: Using SSH client type: native
	I1207 20:28:21.734156   30218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1207 20:28:21.734171   30218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 20:28:21.846718   30218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701980901.815163124
	
	I1207 20:28:21.846741   30218 fix.go:206] guest clock: 1701980901.815163124
	I1207 20:28:21.846751   30218 fix.go:219] Guest: 2023-12-07 20:28:21.815163124 +0000 UTC Remote: 2023-12-07 20:28:21.730401212 +0000 UTC m=+92.656184294 (delta=84.761912ms)
	I1207 20:28:21.846770   30218 fix.go:190] guest clock delta is within tolerance: 84.761912ms
	I1207 20:28:21.846775   30218 start.go:83] releasing machines lock for "multinode-660958-m02", held for 24.081093383s
	I1207 20:28:21.846795   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:28:21.847063   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetIP
	I1207 20:28:21.849499   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.849837   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:21.849857   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.852283   30218 out.go:177] * Found network options:
	I1207 20:28:21.853844   30218 out.go:177]   - NO_PROXY=192.168.39.19
	W1207 20:28:21.855412   30218 proxy.go:119] fail to check proxy env: Error ip not in block
	I1207 20:28:21.855455   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:28:21.855946   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:28:21.856129   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:28:21.856206   30218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:28:21.856249   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	W1207 20:28:21.856328   30218 proxy.go:119] fail to check proxy env: Error ip not in block
	I1207 20:28:21.856392   30218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 20:28:21.856414   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:28:21.858835   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.859100   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.859232   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:21.859264   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.859363   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:28:21.859370   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:21.859397   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:21.859561   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:21.859572   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:28:21.859719   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:28:21.859725   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:28:21.859860   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:28:21.859857   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa Username:docker}
	I1207 20:28:21.860000   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa Username:docker}
	I1207 20:28:21.964564   30218 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1207 20:28:22.098653   30218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1207 20:28:22.104868   30218 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1207 20:28:22.104993   30218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 20:28:22.105067   30218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 20:28:22.121298   30218 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1207 20:28:22.121360   30218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 20:28:22.121367   30218 start.go:475] detecting cgroup driver to use...
	I1207 20:28:22.121424   30218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 20:28:22.138626   30218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 20:28:22.151537   30218 docker.go:203] disabling cri-docker service (if available) ...
	I1207 20:28:22.151600   30218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 20:28:22.165426   30218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 20:28:22.179104   30218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 20:28:22.297529   30218 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1207 20:28:22.297603   30218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 20:28:22.423288   30218 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1207 20:28:22.423374   30218 docker.go:219] disabling docker service ...
	I1207 20:28:22.423451   30218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 20:28:22.438083   30218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 20:28:22.449776   30218 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1207 20:28:22.449954   30218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 20:28:22.557674   30218 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1207 20:28:22.557744   30218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 20:28:22.574268   30218 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1207 20:28:22.574746   30218 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1207 20:28:22.667262   30218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 20:28:22.680371   30218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:28:22.698190   30218 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1207 20:28:22.698616   30218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 20:28:22.698671   30218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:28:22.708325   30218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 20:28:22.708390   30218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:28:22.717737   30218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:28:22.726886   30218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:28:22.736318   30218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 20:28:22.745355   30218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:28:22.753062   30218 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 20:28:22.753111   30218 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 20:28:22.753162   30218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 20:28:22.765527   30218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 20:28:22.774013   30218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:28:22.875536   30218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 20:28:23.044320   30218 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 20:28:23.044404   30218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 20:28:23.049081   30218 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1207 20:28:23.049109   30218 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1207 20:28:23.049120   30218 command_runner.go:130] > Device: 16h/22d	Inode: 720         Links: 1
	I1207 20:28:23.049130   30218 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1207 20:28:23.049138   30218 command_runner.go:130] > Access: 2023-12-07 20:28:23.002145302 +0000
	I1207 20:28:23.049147   30218 command_runner.go:130] > Modify: 2023-12-07 20:28:23.002145302 +0000
	I1207 20:28:23.049156   30218 command_runner.go:130] > Change: 2023-12-07 20:28:23.002145302 +0000
	I1207 20:28:23.049163   30218 command_runner.go:130] >  Birth: -
	I1207 20:28:23.049183   30218 start.go:543] Will wait 60s for crictl version
	I1207 20:28:23.049226   30218 ssh_runner.go:195] Run: which crictl
	I1207 20:28:23.052824   30218 command_runner.go:130] > /usr/bin/crictl
	I1207 20:28:23.052878   30218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 20:28:23.098315   30218 command_runner.go:130] > Version:  0.1.0
	I1207 20:28:23.098342   30218 command_runner.go:130] > RuntimeName:  cri-o
	I1207 20:28:23.098349   30218 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1207 20:28:23.098362   30218 command_runner.go:130] > RuntimeApiVersion:  v1
	I1207 20:28:23.098410   30218 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 20:28:23.098482   30218 ssh_runner.go:195] Run: crio --version
	I1207 20:28:23.143672   30218 command_runner.go:130] > crio version 1.24.1
	I1207 20:28:23.143692   30218 command_runner.go:130] > Version:          1.24.1
	I1207 20:28:23.143702   30218 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1207 20:28:23.143708   30218 command_runner.go:130] > GitTreeState:     dirty
	I1207 20:28:23.143715   30218 command_runner.go:130] > BuildDate:        2023-12-05T19:18:32Z
	I1207 20:28:23.143723   30218 command_runner.go:130] > GoVersion:        go1.19.9
	I1207 20:28:23.143729   30218 command_runner.go:130] > Compiler:         gc
	I1207 20:28:23.143736   30218 command_runner.go:130] > Platform:         linux/amd64
	I1207 20:28:23.143745   30218 command_runner.go:130] > Linkmode:         dynamic
	I1207 20:28:23.143761   30218 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1207 20:28:23.143776   30218 command_runner.go:130] > SeccompEnabled:   true
	I1207 20:28:23.143783   30218 command_runner.go:130] > AppArmorEnabled:  false
	I1207 20:28:23.143905   30218 ssh_runner.go:195] Run: crio --version
	I1207 20:28:23.192320   30218 command_runner.go:130] > crio version 1.24.1
	I1207 20:28:23.192348   30218 command_runner.go:130] > Version:          1.24.1
	I1207 20:28:23.192360   30218 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1207 20:28:23.192369   30218 command_runner.go:130] > GitTreeState:     dirty
	I1207 20:28:23.192382   30218 command_runner.go:130] > BuildDate:        2023-12-05T19:18:32Z
	I1207 20:28:23.192390   30218 command_runner.go:130] > GoVersion:        go1.19.9
	I1207 20:28:23.192398   30218 command_runner.go:130] > Compiler:         gc
	I1207 20:28:23.192416   30218 command_runner.go:130] > Platform:         linux/amd64
	I1207 20:28:23.192427   30218 command_runner.go:130] > Linkmode:         dynamic
	I1207 20:28:23.192443   30218 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1207 20:28:23.192454   30218 command_runner.go:130] > SeccompEnabled:   true
	I1207 20:28:23.192465   30218 command_runner.go:130] > AppArmorEnabled:  false
	I1207 20:28:23.195403   30218 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 20:28:23.197073   30218 out.go:177]   - env NO_PROXY=192.168.39.19
	I1207 20:28:23.198506   30218 main.go:141] libmachine: (multinode-660958-m02) Calling .GetIP
	I1207 20:28:23.201303   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:23.201634   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:28:23.201663   30218 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:28:23.201851   30218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 20:28:23.205836   30218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:28:23.219417   30218 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958 for IP: 192.168.39.69
	I1207 20:28:23.219453   30218 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:28:23.219626   30218 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 20:28:23.219685   30218 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 20:28:23.219702   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 20:28:23.219722   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 20:28:23.219738   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 20:28:23.219756   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 20:28:23.219824   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 20:28:23.219860   30218 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 20:28:23.219875   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 20:28:23.219910   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 20:28:23.219941   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:28:23.219974   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 20:28:23.220030   30218 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:28:23.220064   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:28:23.220092   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem -> /usr/share/ca-certificates/16840.pem
	I1207 20:28:23.220113   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /usr/share/ca-certificates/168402.pem
	I1207 20:28:23.220478   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:28:23.243890   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 20:28:23.267253   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:28:23.290214   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:28:23.313992   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:28:23.337194   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 20:28:23.360058   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 20:28:23.382848   30218 ssh_runner.go:195] Run: openssl version
	I1207 20:28:23.388294   30218 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1207 20:28:23.388476   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 20:28:23.398204   30218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 20:28:23.402701   30218 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:28:23.402910   30218 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:28:23.402953   30218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 20:28:23.408144   30218 command_runner.go:130] > 51391683
	I1207 20:28:23.408365   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 20:28:23.418289   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 20:28:23.428750   30218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 20:28:23.433256   30218 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:28:23.433314   30218 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:28:23.433372   30218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 20:28:23.438977   30218 command_runner.go:130] > 3ec20f2e
	I1207 20:28:23.439197   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 20:28:23.449522   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:28:23.462194   30218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:28:23.466979   30218 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:28:23.467004   30218 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:28:23.467041   30218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:28:23.472321   30218 command_runner.go:130] > b5213941
	I1207 20:28:23.472768   30218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 20:28:23.482481   30218 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:28:23.486342   30218 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:28:23.486387   30218 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:28:23.486483   30218 ssh_runner.go:195] Run: crio config
	I1207 20:28:23.537018   30218 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1207 20:28:23.537048   30218 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1207 20:28:23.537057   30218 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1207 20:28:23.537062   30218 command_runner.go:130] > #
	I1207 20:28:23.537073   30218 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1207 20:28:23.537083   30218 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1207 20:28:23.537096   30218 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1207 20:28:23.537107   30218 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1207 20:28:23.537113   30218 command_runner.go:130] > # reload'.
	I1207 20:28:23.537127   30218 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1207 20:28:23.537138   30218 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1207 20:28:23.537149   30218 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1207 20:28:23.537162   30218 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1207 20:28:23.537168   30218 command_runner.go:130] > [crio]
	I1207 20:28:23.537177   30218 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1207 20:28:23.537189   30218 command_runner.go:130] > # container images, in this directory.
	I1207 20:28:23.537220   30218 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1207 20:28:23.537238   30218 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1207 20:28:23.537247   30218 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1207 20:28:23.537265   30218 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1207 20:28:23.537278   30218 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1207 20:28:23.537558   30218 command_runner.go:130] > storage_driver = "overlay"
	I1207 20:28:23.537571   30218 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1207 20:28:23.537576   30218 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1207 20:28:23.537581   30218 command_runner.go:130] > storage_option = [
	I1207 20:28:23.537732   30218 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1207 20:28:23.537766   30218 command_runner.go:130] > ]
	I1207 20:28:23.537781   30218 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1207 20:28:23.537794   30218 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1207 20:28:23.538290   30218 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1207 20:28:23.538307   30218 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1207 20:28:23.538317   30218 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1207 20:28:23.538325   30218 command_runner.go:130] > # always happen on a node reboot
	I1207 20:28:23.538721   30218 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1207 20:28:23.538736   30218 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1207 20:28:23.538746   30218 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1207 20:28:23.538760   30218 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1207 20:28:23.539122   30218 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1207 20:28:23.539140   30218 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1207 20:28:23.539153   30218 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1207 20:28:23.539637   30218 command_runner.go:130] > # internal_wipe = true
	I1207 20:28:23.539652   30218 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1207 20:28:23.539661   30218 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1207 20:28:23.539667   30218 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1207 20:28:23.540033   30218 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1207 20:28:23.540048   30218 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1207 20:28:23.540055   30218 command_runner.go:130] > [crio.api]
	I1207 20:28:23.540064   30218 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1207 20:28:23.540519   30218 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1207 20:28:23.540539   30218 command_runner.go:130] > # IP address on which the stream server will listen.
	I1207 20:28:23.540781   30218 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1207 20:28:23.540794   30218 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1207 20:28:23.540799   30218 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1207 20:28:23.541222   30218 command_runner.go:130] > # stream_port = "0"
	I1207 20:28:23.541240   30218 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1207 20:28:23.541757   30218 command_runner.go:130] > # stream_enable_tls = false
	I1207 20:28:23.541770   30218 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1207 20:28:23.542208   30218 command_runner.go:130] > # stream_idle_timeout = ""
	I1207 20:28:23.542227   30218 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1207 20:28:23.542237   30218 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1207 20:28:23.542243   30218 command_runner.go:130] > # minutes.
	I1207 20:28:23.542595   30218 command_runner.go:130] > # stream_tls_cert = ""
	I1207 20:28:23.542610   30218 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1207 20:28:23.542620   30218 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1207 20:28:23.542878   30218 command_runner.go:130] > # stream_tls_key = ""
	I1207 20:28:23.542894   30218 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1207 20:28:23.542904   30218 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1207 20:28:23.542913   30218 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1207 20:28:23.543172   30218 command_runner.go:130] > # stream_tls_ca = ""
	I1207 20:28:23.543194   30218 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1207 20:28:23.544663   30218 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1207 20:28:23.544692   30218 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1207 20:28:23.544697   30218 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1207 20:28:23.544709   30218 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1207 20:28:23.544716   30218 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1207 20:28:23.544723   30218 command_runner.go:130] > [crio.runtime]
	I1207 20:28:23.544729   30218 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1207 20:28:23.544736   30218 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1207 20:28:23.544741   30218 command_runner.go:130] > # "nofile=1024:2048"
	I1207 20:28:23.544749   30218 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1207 20:28:23.544755   30218 command_runner.go:130] > # default_ulimits = [
	I1207 20:28:23.544759   30218 command_runner.go:130] > # ]
	I1207 20:28:23.544768   30218 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1207 20:28:23.544774   30218 command_runner.go:130] > # no_pivot = false
	I1207 20:28:23.544780   30218 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1207 20:28:23.544788   30218 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1207 20:28:23.544795   30218 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1207 20:28:23.544804   30218 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1207 20:28:23.544809   30218 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1207 20:28:23.544818   30218 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1207 20:28:23.544825   30218 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1207 20:28:23.544829   30218 command_runner.go:130] > # Cgroup setting for conmon
	I1207 20:28:23.544838   30218 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1207 20:28:23.544844   30218 command_runner.go:130] > conmon_cgroup = "pod"
	I1207 20:28:23.544851   30218 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1207 20:28:23.544858   30218 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1207 20:28:23.544867   30218 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1207 20:28:23.544873   30218 command_runner.go:130] > conmon_env = [
	I1207 20:28:23.544879   30218 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1207 20:28:23.544885   30218 command_runner.go:130] > ]
	I1207 20:28:23.544891   30218 command_runner.go:130] > # Additional environment variables to set for all the
	I1207 20:28:23.544898   30218 command_runner.go:130] > # containers. These are overridden if set in the
	I1207 20:28:23.544904   30218 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1207 20:28:23.544908   30218 command_runner.go:130] > # default_env = [
	I1207 20:28:23.544911   30218 command_runner.go:130] > # ]
	I1207 20:28:23.544921   30218 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1207 20:28:23.544925   30218 command_runner.go:130] > # selinux = false
	I1207 20:28:23.544934   30218 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1207 20:28:23.544940   30218 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1207 20:28:23.544949   30218 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1207 20:28:23.544953   30218 command_runner.go:130] > # seccomp_profile = ""
	I1207 20:28:23.544959   30218 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1207 20:28:23.544967   30218 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1207 20:28:23.544987   30218 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1207 20:28:23.544995   30218 command_runner.go:130] > # which might increase security.
	I1207 20:28:23.544999   30218 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1207 20:28:23.545005   30218 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1207 20:28:23.545013   30218 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1207 20:28:23.545023   30218 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1207 20:28:23.545032   30218 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1207 20:28:23.545039   30218 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:28:23.545044   30218 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1207 20:28:23.545052   30218 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1207 20:28:23.545057   30218 command_runner.go:130] > # the cgroup blockio controller.
	I1207 20:28:23.545065   30218 command_runner.go:130] > # blockio_config_file = ""
	I1207 20:28:23.545073   30218 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1207 20:28:23.545079   30218 command_runner.go:130] > # irqbalance daemon.
	I1207 20:28:23.545085   30218 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1207 20:28:23.545093   30218 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1207 20:28:23.545101   30218 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:28:23.545105   30218 command_runner.go:130] > # rdt_config_file = ""
	I1207 20:28:23.545114   30218 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1207 20:28:23.545129   30218 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1207 20:28:23.545139   30218 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1207 20:28:23.545148   30218 command_runner.go:130] > # separate_pull_cgroup = ""
	I1207 20:28:23.545162   30218 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1207 20:28:23.545175   30218 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1207 20:28:23.545182   30218 command_runner.go:130] > # will be added.
	I1207 20:28:23.545187   30218 command_runner.go:130] > # default_capabilities = [
	I1207 20:28:23.545193   30218 command_runner.go:130] > # 	"CHOWN",
	I1207 20:28:23.545197   30218 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1207 20:28:23.545202   30218 command_runner.go:130] > # 	"FSETID",
	I1207 20:28:23.545206   30218 command_runner.go:130] > # 	"FOWNER",
	I1207 20:28:23.545210   30218 command_runner.go:130] > # 	"SETGID",
	I1207 20:28:23.545214   30218 command_runner.go:130] > # 	"SETUID",
	I1207 20:28:23.545220   30218 command_runner.go:130] > # 	"SETPCAP",
	I1207 20:28:23.545225   30218 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1207 20:28:23.545233   30218 command_runner.go:130] > # 	"KILL",
	I1207 20:28:23.545239   30218 command_runner.go:130] > # ]
	I1207 20:28:23.545252   30218 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1207 20:28:23.545265   30218 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1207 20:28:23.545274   30218 command_runner.go:130] > # default_sysctls = [
	I1207 20:28:23.545283   30218 command_runner.go:130] > # ]
	I1207 20:28:23.545291   30218 command_runner.go:130] > # List of devices on the host that a
	I1207 20:28:23.545304   30218 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1207 20:28:23.545312   30218 command_runner.go:130] > # allowed_devices = [
	I1207 20:28:23.545321   30218 command_runner.go:130] > # 	"/dev/fuse",
	I1207 20:28:23.545327   30218 command_runner.go:130] > # ]
	I1207 20:28:23.545338   30218 command_runner.go:130] > # List of additional devices, specified as
	I1207 20:28:23.545348   30218 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1207 20:28:23.545356   30218 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1207 20:28:23.545373   30218 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1207 20:28:23.545380   30218 command_runner.go:130] > # additional_devices = [
	I1207 20:28:23.545383   30218 command_runner.go:130] > # ]
	I1207 20:28:23.545389   30218 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1207 20:28:23.545396   30218 command_runner.go:130] > # cdi_spec_dirs = [
	I1207 20:28:23.545399   30218 command_runner.go:130] > # 	"/etc/cdi",
	I1207 20:28:23.545403   30218 command_runner.go:130] > # 	"/var/run/cdi",
	I1207 20:28:23.545409   30218 command_runner.go:130] > # ]
	I1207 20:28:23.545417   30218 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1207 20:28:23.545425   30218 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1207 20:28:23.545429   30218 command_runner.go:130] > # Defaults to false.
	I1207 20:28:23.545434   30218 command_runner.go:130] > # device_ownership_from_security_context = false
	I1207 20:28:23.545441   30218 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1207 20:28:23.545449   30218 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1207 20:28:23.545455   30218 command_runner.go:130] > # hooks_dir = [
	I1207 20:28:23.545460   30218 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1207 20:28:23.545466   30218 command_runner.go:130] > # ]
	I1207 20:28:23.545473   30218 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1207 20:28:23.545480   30218 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1207 20:28:23.545487   30218 command_runner.go:130] > # its default mounts from the following two files:
	I1207 20:28:23.545490   30218 command_runner.go:130] > #
	I1207 20:28:23.545497   30218 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1207 20:28:23.545505   30218 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1207 20:28:23.545510   30218 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1207 20:28:23.545516   30218 command_runner.go:130] > #
	I1207 20:28:23.545522   30218 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1207 20:28:23.545530   30218 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1207 20:28:23.545536   30218 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1207 20:28:23.545543   30218 command_runner.go:130] > #      only add mounts it finds in this file.
	I1207 20:28:23.545547   30218 command_runner.go:130] > #
	I1207 20:28:23.545558   30218 command_runner.go:130] > # default_mounts_file = ""
	I1207 20:28:23.545563   30218 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1207 20:28:23.545572   30218 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1207 20:28:23.545576   30218 command_runner.go:130] > pids_limit = 1024
	I1207 20:28:23.545584   30218 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1207 20:28:23.545591   30218 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1207 20:28:23.545599   30218 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1207 20:28:23.545607   30218 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1207 20:28:23.545614   30218 command_runner.go:130] > # log_size_max = -1
	I1207 20:28:23.545620   30218 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1207 20:28:23.545627   30218 command_runner.go:130] > # log_to_journald = false
	I1207 20:28:23.545652   30218 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1207 20:28:23.545659   30218 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1207 20:28:23.545664   30218 command_runner.go:130] > # Path to directory for container attach sockets.
	I1207 20:28:23.545670   30218 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1207 20:28:23.545675   30218 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1207 20:28:23.545682   30218 command_runner.go:130] > # bind_mount_prefix = ""
	I1207 20:28:23.545687   30218 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1207 20:28:23.545691   30218 command_runner.go:130] > # read_only = false
	I1207 20:28:23.545700   30218 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1207 20:28:23.545708   30218 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1207 20:28:23.545714   30218 command_runner.go:130] > # live configuration reload.
	I1207 20:28:23.545719   30218 command_runner.go:130] > # log_level = "info"
	I1207 20:28:23.545727   30218 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1207 20:28:23.545732   30218 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:28:23.545738   30218 command_runner.go:130] > # log_filter = ""
	I1207 20:28:23.545744   30218 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1207 20:28:23.545752   30218 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1207 20:28:23.545756   30218 command_runner.go:130] > # separated by comma.
	I1207 20:28:23.545762   30218 command_runner.go:130] > # uid_mappings = ""
	I1207 20:28:23.545768   30218 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1207 20:28:23.545776   30218 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1207 20:28:23.545780   30218 command_runner.go:130] > # separated by comma.
	I1207 20:28:23.545784   30218 command_runner.go:130] > # gid_mappings = ""
	I1207 20:28:23.545792   30218 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1207 20:28:23.545800   30218 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1207 20:28:23.545806   30218 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1207 20:28:23.545813   30218 command_runner.go:130] > # minimum_mappable_uid = -1
	I1207 20:28:23.545818   30218 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1207 20:28:23.545826   30218 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1207 20:28:23.545832   30218 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1207 20:28:23.545839   30218 command_runner.go:130] > # minimum_mappable_gid = -1
	I1207 20:28:23.545844   30218 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1207 20:28:23.545853   30218 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1207 20:28:23.545858   30218 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1207 20:28:23.545865   30218 command_runner.go:130] > # ctr_stop_timeout = 30
	I1207 20:28:23.545870   30218 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1207 20:28:23.545879   30218 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1207 20:28:23.545884   30218 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1207 20:28:23.545891   30218 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1207 20:28:23.545895   30218 command_runner.go:130] > drop_infra_ctr = false
	I1207 20:28:23.545904   30218 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1207 20:28:23.545909   30218 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1207 20:28:23.545918   30218 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1207 20:28:23.545938   30218 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1207 20:28:23.545952   30218 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1207 20:28:23.545957   30218 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1207 20:28:23.545964   30218 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1207 20:28:23.545972   30218 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1207 20:28:23.545979   30218 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1207 20:28:23.545985   30218 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1207 20:28:23.545997   30218 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1207 20:28:23.546010   30218 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1207 20:28:23.546020   30218 command_runner.go:130] > # default_runtime = "runc"
	I1207 20:28:23.546031   30218 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1207 20:28:23.546045   30218 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1207 20:28:23.546062   30218 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1207 20:28:23.546070   30218 command_runner.go:130] > # creation as a file is not desired either.
	I1207 20:28:23.546078   30218 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1207 20:28:23.546086   30218 command_runner.go:130] > # the hostname is being managed dynamically.
	I1207 20:28:23.546090   30218 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1207 20:28:23.546094   30218 command_runner.go:130] > # ]
	I1207 20:28:23.546100   30218 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1207 20:28:23.546108   30218 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1207 20:28:23.546115   30218 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1207 20:28:23.546123   30218 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1207 20:28:23.546126   30218 command_runner.go:130] > #
	I1207 20:28:23.546131   30218 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1207 20:28:23.546139   30218 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1207 20:28:23.546143   30218 command_runner.go:130] > #  runtime_type = "oci"
	I1207 20:28:23.546150   30218 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1207 20:28:23.546155   30218 command_runner.go:130] > #  privileged_without_host_devices = false
	I1207 20:28:23.546160   30218 command_runner.go:130] > #  allowed_annotations = []
	I1207 20:28:23.546164   30218 command_runner.go:130] > # Where:
	I1207 20:28:23.546171   30218 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1207 20:28:23.546177   30218 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1207 20:28:23.546202   30218 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1207 20:28:23.546210   30218 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1207 20:28:23.546214   30218 command_runner.go:130] > #   in $PATH.
	I1207 20:28:23.546222   30218 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1207 20:28:23.546231   30218 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1207 20:28:23.546244   30218 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1207 20:28:23.546254   30218 command_runner.go:130] > #   state.
	I1207 20:28:23.546263   30218 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1207 20:28:23.546277   30218 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1207 20:28:23.546290   30218 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1207 20:28:23.546302   30218 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1207 20:28:23.546315   30218 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1207 20:28:23.546329   30218 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1207 20:28:23.546339   30218 command_runner.go:130] > #   The currently recognized values are:
	I1207 20:28:23.546346   30218 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1207 20:28:23.546358   30218 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1207 20:28:23.546371   30218 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1207 20:28:23.546383   30218 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1207 20:28:23.546398   30218 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1207 20:28:23.546411   30218 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1207 20:28:23.546425   30218 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1207 20:28:23.546439   30218 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1207 20:28:23.546450   30218 command_runner.go:130] > #   should be moved to the container's cgroup
	I1207 20:28:23.546461   30218 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1207 20:28:23.546471   30218 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1207 20:28:23.546481   30218 command_runner.go:130] > runtime_type = "oci"
	I1207 20:28:23.546488   30218 command_runner.go:130] > runtime_root = "/run/runc"
	I1207 20:28:23.546498   30218 command_runner.go:130] > runtime_config_path = ""
	I1207 20:28:23.546506   30218 command_runner.go:130] > monitor_path = ""
	I1207 20:28:23.546516   30218 command_runner.go:130] > monitor_cgroup = ""
	I1207 20:28:23.546524   30218 command_runner.go:130] > monitor_exec_cgroup = ""
	I1207 20:28:23.546537   30218 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1207 20:28:23.546547   30218 command_runner.go:130] > # running containers
	I1207 20:28:23.546563   30218 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1207 20:28:23.546576   30218 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1207 20:28:23.546605   30218 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1207 20:28:23.546618   30218 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1207 20:28:23.546626   30218 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1207 20:28:23.546636   30218 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1207 20:28:23.546647   30218 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1207 20:28:23.546657   30218 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1207 20:28:23.546669   30218 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1207 20:28:23.546679   30218 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1207 20:28:23.546689   30218 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1207 20:28:23.546702   30218 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1207 20:28:23.546717   30218 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1207 20:28:23.546732   30218 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1207 20:28:23.546747   30218 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1207 20:28:23.546759   30218 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1207 20:28:23.546777   30218 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1207 20:28:23.546793   30218 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1207 20:28:23.546805   30218 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1207 20:28:23.546820   30218 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1207 20:28:23.546829   30218 command_runner.go:130] > # Example:
	I1207 20:28:23.546837   30218 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1207 20:28:23.546847   30218 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1207 20:28:23.546854   30218 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1207 20:28:23.546859   30218 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1207 20:28:23.546864   30218 command_runner.go:130] > # cpuset = 0
	I1207 20:28:23.546868   30218 command_runner.go:130] > # cpushares = "0-1"
	I1207 20:28:23.546872   30218 command_runner.go:130] > # Where:
	I1207 20:28:23.546878   30218 command_runner.go:130] > # The workload name is workload-type.
	I1207 20:28:23.546885   30218 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1207 20:28:23.546893   30218 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1207 20:28:23.546899   30218 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1207 20:28:23.546908   30218 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1207 20:28:23.546916   30218 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1207 20:28:23.546919   30218 command_runner.go:130] > # 
	I1207 20:28:23.546929   30218 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1207 20:28:23.546935   30218 command_runner.go:130] > #
	I1207 20:28:23.546941   30218 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1207 20:28:23.546949   30218 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1207 20:28:23.546955   30218 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1207 20:28:23.546963   30218 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1207 20:28:23.546969   30218 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1207 20:28:23.546973   30218 command_runner.go:130] > [crio.image]
	I1207 20:28:23.546979   30218 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1207 20:28:23.546986   30218 command_runner.go:130] > # default_transport = "docker://"
	I1207 20:28:23.546992   30218 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1207 20:28:23.547001   30218 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1207 20:28:23.547005   30218 command_runner.go:130] > # global_auth_file = ""
	I1207 20:28:23.547013   30218 command_runner.go:130] > # The image used to instantiate infra containers.
	I1207 20:28:23.547019   30218 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:28:23.547026   30218 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1207 20:28:23.547032   30218 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1207 20:28:23.547041   30218 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1207 20:28:23.547048   30218 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:28:23.547053   30218 command_runner.go:130] > # pause_image_auth_file = ""
	I1207 20:28:23.547060   30218 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1207 20:28:23.547066   30218 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1207 20:28:23.547072   30218 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1207 20:28:23.547080   30218 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1207 20:28:23.547085   30218 command_runner.go:130] > # pause_command = "/pause"
	I1207 20:28:23.547091   30218 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1207 20:28:23.547101   30218 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1207 20:28:23.547109   30218 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1207 20:28:23.547116   30218 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1207 20:28:23.547123   30218 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1207 20:28:23.547128   30218 command_runner.go:130] > # signature_policy = ""
	I1207 20:28:23.547136   30218 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1207 20:28:23.547142   30218 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1207 20:28:23.547148   30218 command_runner.go:130] > # changing them here.
	I1207 20:28:23.547153   30218 command_runner.go:130] > # insecure_registries = [
	I1207 20:28:23.547158   30218 command_runner.go:130] > # ]
	I1207 20:28:23.547164   30218 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1207 20:28:23.547169   30218 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1207 20:28:23.547175   30218 command_runner.go:130] > # image_volumes = "mkdir"
	I1207 20:28:23.547180   30218 command_runner.go:130] > # Temporary directory to use for storing big files
	I1207 20:28:23.547186   30218 command_runner.go:130] > # big_files_temporary_dir = ""
	I1207 20:28:23.547192   30218 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1207 20:28:23.547198   30218 command_runner.go:130] > # CNI plugins.
	I1207 20:28:23.547202   30218 command_runner.go:130] > [crio.network]
	I1207 20:28:23.547210   30218 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1207 20:28:23.547215   30218 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1207 20:28:23.547221   30218 command_runner.go:130] > # cni_default_network = ""
	I1207 20:28:23.547231   30218 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1207 20:28:23.547242   30218 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1207 20:28:23.547255   30218 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1207 20:28:23.547264   30218 command_runner.go:130] > # plugin_dirs = [
	I1207 20:28:23.547272   30218 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1207 20:28:23.547281   30218 command_runner.go:130] > # ]
	I1207 20:28:23.547290   30218 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1207 20:28:23.547300   30218 command_runner.go:130] > [crio.metrics]
	I1207 20:28:23.547308   30218 command_runner.go:130] > # Globally enable or disable metrics support.
	I1207 20:28:23.547316   30218 command_runner.go:130] > enable_metrics = true
	I1207 20:28:23.547322   30218 command_runner.go:130] > # Specify enabled metrics collectors.
	I1207 20:28:23.547329   30218 command_runner.go:130] > # Per default all metrics are enabled.
	I1207 20:28:23.547335   30218 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1207 20:28:23.547344   30218 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1207 20:28:23.547351   30218 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1207 20:28:23.547356   30218 command_runner.go:130] > # metrics_collectors = [
	I1207 20:28:23.547362   30218 command_runner.go:130] > # 	"operations",
	I1207 20:28:23.547367   30218 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1207 20:28:23.547371   30218 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1207 20:28:23.547379   30218 command_runner.go:130] > # 	"operations_errors",
	I1207 20:28:23.547383   30218 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1207 20:28:23.547387   30218 command_runner.go:130] > # 	"image_pulls_by_name",
	I1207 20:28:23.547393   30218 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1207 20:28:23.547398   30218 command_runner.go:130] > # 	"image_pulls_failures",
	I1207 20:28:23.547404   30218 command_runner.go:130] > # 	"image_pulls_successes",
	I1207 20:28:23.547409   30218 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1207 20:28:23.547415   30218 command_runner.go:130] > # 	"image_layer_reuse",
	I1207 20:28:23.547420   30218 command_runner.go:130] > # 	"containers_oom_total",
	I1207 20:28:23.547426   30218 command_runner.go:130] > # 	"containers_oom",
	I1207 20:28:23.547430   30218 command_runner.go:130] > # 	"processes_defunct",
	I1207 20:28:23.547435   30218 command_runner.go:130] > # 	"operations_total",
	I1207 20:28:23.547439   30218 command_runner.go:130] > # 	"operations_latency_seconds",
	I1207 20:28:23.547446   30218 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1207 20:28:23.547450   30218 command_runner.go:130] > # 	"operations_errors_total",
	I1207 20:28:23.547455   30218 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1207 20:28:23.547460   30218 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1207 20:28:23.547467   30218 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1207 20:28:23.547472   30218 command_runner.go:130] > # 	"image_pulls_success_total",
	I1207 20:28:23.547476   30218 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1207 20:28:23.547483   30218 command_runner.go:130] > # 	"containers_oom_count_total",
	I1207 20:28:23.547486   30218 command_runner.go:130] > # ]
	I1207 20:28:23.547491   30218 command_runner.go:130] > # The port on which the metrics server will listen.
	I1207 20:28:23.547497   30218 command_runner.go:130] > # metrics_port = 9090
	I1207 20:28:23.547502   30218 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1207 20:28:23.547509   30218 command_runner.go:130] > # metrics_socket = ""
	I1207 20:28:23.547515   30218 command_runner.go:130] > # The certificate for the secure metrics server.
	I1207 20:28:23.547523   30218 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1207 20:28:23.547529   30218 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1207 20:28:23.547536   30218 command_runner.go:130] > # certificate on any modification event.
	I1207 20:28:23.547540   30218 command_runner.go:130] > # metrics_cert = ""
	I1207 20:28:23.547552   30218 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1207 20:28:23.547559   30218 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1207 20:28:23.547564   30218 command_runner.go:130] > # metrics_key = ""
	I1207 20:28:23.547572   30218 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1207 20:28:23.547576   30218 command_runner.go:130] > [crio.tracing]
	I1207 20:28:23.547583   30218 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1207 20:28:23.547588   30218 command_runner.go:130] > # enable_tracing = false
	I1207 20:28:23.547594   30218 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1207 20:28:23.547600   30218 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1207 20:28:23.547607   30218 command_runner.go:130] > # Number of samples to collect per million spans.
	I1207 20:28:23.547612   30218 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1207 20:28:23.547620   30218 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1207 20:28:23.547624   30218 command_runner.go:130] > [crio.stats]
	I1207 20:28:23.547630   30218 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1207 20:28:23.547638   30218 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1207 20:28:23.547643   30218 command_runner.go:130] > # stats_collection_period = 0
	I1207 20:28:23.547679   30218 command_runner.go:130] ! time="2023-12-07 20:28:23.505490875Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1207 20:28:23.547697   30218 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
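The `crio config` dump above is largely the stock CRI-O 1.24 defaults (the commented lines); the uncommented entries (root, runroot, storage_driver, cgroup_manager, conmon, pids_limit, pause_image, the runc runtime stanza, enable_metrics) are the values minikube has set explicitly. A quick, illustrative way to list only those effective settings on the node:

	# Show only active (uncommented, non-blank) settings from the CRI-O config,
	# dropping the stderr startup messages that appear at the end of the dump.
	sudo crio config 2>/dev/null | grep -Ev '^[[:space:]]*(#|$)'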
	I1207 20:28:23.547749   30218 cni.go:84] Creating CNI manager for ""
	I1207 20:28:23.547758   30218 cni.go:136] 2 nodes found, recommending kindnet
	I1207 20:28:23.547768   30218 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:28:23.547785   30218 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-660958 NodeName:multinode-660958-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 20:28:23.547877   30218 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-660958-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
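	The generated manifest stacks four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one file, split on the --- separators. To see which of these values deviate from the upstream v1.28 defaults, one option (a sketch, run wherever kubeadm v1.28.4 is available) is:

	# Print kubeadm's built-in defaults for the same config kinds, so the
	# overrides above (podSubnet, cgroupDriver, evictionHard, ...) stand out.
	kubeadm config print init-defaults \
	  --component-configs KubeletConfiguration,KubeProxyConfiguration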
	
	I1207 20:28:23.547921   30218 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-660958-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 20:28:23.547967   30218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 20:28:23.556934   30218 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I1207 20:28:23.557061   30218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I1207 20:28:23.557122   30218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I1207 20:28:23.565719   30218 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I1207 20:28:23.565748   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I1207 20:28:23.565782   30218 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I1207 20:28:23.565809   30218 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I1207 20:28:23.565820   30218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I1207 20:28:23.570135   30218 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1207 20:28:23.570460   30218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1207 20:28:23.570492   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I1207 20:28:31.168549   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1207 20:28:31.168629   30218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1207 20:28:31.173855   30218 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1207 20:28:31.173910   30218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1207 20:28:31.173966   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I1207 20:28:33.361857   30218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:28:33.376487   30218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I1207 20:28:33.376576   30218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I1207 20:28:33.381177   30218 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1207 20:28:33.381220   30218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1207 20:28:33.381245   30218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
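
For reference, the three transfers above (kubectl, kubeadm, kubelet) all follow the same "stat the target, copy only if missing" pattern. Below is a minimal sketch of that pattern, not minikube's actual implementation: plain local os/exec and file I/O stand in for the ssh_runner that runs stat/scp on the node, and the cache directory shown is illustrative.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    )

    // ensureBinary copies src to dst unless dst already exists, mirroring the
    // existence check + scp sequence in the log. In minikube these commands
    // run on the node over SSH; here they run locally for illustration.
    func ensureBinary(src, dst string) error {
    	if err := exec.Command("stat", dst).Run(); err == nil {
    		return nil // already present, skip the transfer
    	}
    	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
    		return err
    	}
    	data, err := os.ReadFile(src)
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(dst, data, 0o755)
    }

    func main() {
    	cache := "/home/jenkins/.minikube/cache/linux/amd64/v1.28.4" // illustrative cache dir
    	target := "/var/lib/minikube/binaries/v1.28.4"
    	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
    		if err := ensureBinary(filepath.Join(cache, bin), filepath.Join(target, bin)); err != nil {
    			fmt.Fprintln(os.Stderr, "transfer failed:", err)
    		}
    	}
    }
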
	I1207 20:28:33.902148   30218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1207 20:28:33.911653   30218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1207 20:28:33.927194   30218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 20:28:33.942927   30218 ssh_runner.go:195] Run: grep 192.168.39.19	control-plane.minikube.internal$ /etc/hosts
	I1207 20:28:33.946862   30218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
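
The bash one-liner above makes the control-plane pin in /etc/hosts idempotent: strip any existing control-plane.minikube.internal line, then append the current IP. A small Go sketch of the same idea (assumed equivalent, requires root to actually write /etc/hosts; IP and hostname copied from the log):

    package main

    import (
    	"os"
    	"strings"
    )

    // pinHost rewrites hostsPath so exactly one line maps name to ip,
    // mirroring the grep -v / append one-liner in the log above.
    func pinHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale entry for the name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := pinHost("/etc/hosts", "192.168.39.19", "control-plane.minikube.internal"); err != nil {
    		panic(err)
    	}
    }
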
	I1207 20:28:33.958471   30218 host.go:66] Checking if "multinode-660958" exists ...
	I1207 20:28:33.958711   30218 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:28:33.958775   30218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:28:33.958807   30218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:28:33.972822   30218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I1207 20:28:33.973262   30218 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:28:33.973700   30218 main.go:141] libmachine: Using API Version  1
	I1207 20:28:33.973720   30218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:28:33.974038   30218 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:28:33.974228   30218 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:28:33.974387   30218 start.go:304] JoinCluster: &{Name:multinode-660958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:28:33.974496   30218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1207 20:28:33.974511   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:28:33.977155   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:28:33.977542   30218 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:28:33.977571   30218 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:28:33.977722   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:28:33.977892   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:28:33.978055   30218 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:28:33.978210   30218 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:28:34.141946   30218 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yvcxd8.fg1tfnbuadrd3l0n --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 20:28:34.142085   30218 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1207 20:28:34.142135   30218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yvcxd8.fg1tfnbuadrd3l0n --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-660958-m02"
	I1207 20:28:34.193138   30218 command_runner.go:130] ! W1207 20:28:34.176927     822 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1207 20:28:34.317260   30218 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 20:28:36.523736   30218 command_runner.go:130] > [preflight] Running pre-flight checks
	I1207 20:28:36.523768   30218 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1207 20:28:36.523782   30218 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1207 20:28:36.523806   30218 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 20:28:36.523818   30218 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 20:28:36.523833   30218 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1207 20:28:36.523842   30218 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1207 20:28:36.523852   30218 command_runner.go:130] > This node has joined the cluster:
	I1207 20:28:36.523862   30218 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1207 20:28:36.523871   30218 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1207 20:28:36.523881   30218 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1207 20:28:36.523909   30218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yvcxd8.fg1tfnbuadrd3l0n --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-660958-m02": (2.38175381s)
	I1207 20:28:36.523937   30218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1207 20:28:36.661066   30218 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1207 20:28:36.780076   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=multinode-660958 minikube.k8s.io/updated_at=2023_12_07T20_28_36_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:28:36.900687   30218 command_runner.go:130] > node/multinode-660958-m02 labeled
	I1207 20:28:36.902721   30218 start.go:306] JoinCluster complete in 2.928331465s
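
The JoinCluster step above is two commands: `kubeadm token create --print-join-command --ttl=0` on the control plane, then the printed join command replayed on the worker with minikube's extra flags. A minimal sketch of that flow, with plain os/exec standing in for the SSH runner (the actual join is left commented out):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"

    	// Step 1: run on the control-plane node to get a join command.
    	out, err := exec.Command(kubeadm, "token", "create", "--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		panic(err)
    	}
    	join := strings.TrimSpace(string(out))

    	// Step 2: replay it on the joining worker with the flags seen in the log.
    	join += " --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-660958-m02"
    	cmd := exec.Command("/bin/bash", "-c", "sudo "+join)
    	fmt.Println("would run:", cmd.String())
    	// cmd.Run() would actually perform the join; omitted in this sketch.
    }
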
	I1207 20:28:36.902746   30218 cni.go:84] Creating CNI manager for ""
	I1207 20:28:36.902752   30218 cni.go:136] 2 nodes found, recommending kindnet
	I1207 20:28:36.902817   30218 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 20:28:36.908972   30218 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1207 20:28:36.908996   30218 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1207 20:28:36.909005   30218 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1207 20:28:36.909015   30218 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1207 20:28:36.909027   30218 command_runner.go:130] > Access: 2023-12-07 20:27:02.626750912 +0000
	I1207 20:28:36.909036   30218 command_runner.go:130] > Modify: 2023-12-05 19:27:41.000000000 +0000
	I1207 20:28:36.909045   30218 command_runner.go:130] > Change: 2023-12-07 20:27:00.736750912 +0000
	I1207 20:28:36.909052   30218 command_runner.go:130] >  Birth: -
	I1207 20:28:36.909197   30218 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1207 20:28:36.909210   30218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1207 20:28:36.929655   30218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 20:28:37.220418   30218 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1207 20:28:37.225395   30218 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1207 20:28:37.229420   30218 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1207 20:28:37.242238   30218 command_runner.go:130] > daemonset.apps/kindnet configured
	I1207 20:28:37.245627   30218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:28:37.245890   30218 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:28:37.246237   30218 round_trippers.go:463] GET https://192.168.39.19:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1207 20:28:37.246254   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:37.246266   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:37.246285   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:37.252745   30218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1207 20:28:37.252769   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:37.252778   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:37.252787   30218 round_trippers.go:580]     Content-Length: 291
	I1207 20:28:37.252801   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:37 GMT
	I1207 20:28:37.252810   30218 round_trippers.go:580]     Audit-Id: e107bd0a-7352-4fed-ae78-723bbf2eb17e
	I1207 20:28:37.252821   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:37.252831   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:37.252843   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:37.252869   30218 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d249b622-1ef8-42db-b860-e5219d7241f8","resourceVersion":"408","creationTimestamp":"2023-12-07T20:27:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1207 20:28:37.253034   30218 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-660958" context rescaled to 1 replicas
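
The GET/rescale above uses the Deployment's scale subresource to pin coredns at one replica. A hedged client-go sketch of the same operation (not minikube's code; the kubeconfig path is illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()

    	// Read the coredns scale subresource, then set it to 1 replica if needed.
    	scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if scale.Spec.Replicas != 1 {
    		scale.Spec.Replicas = 1
    		if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("coredns rescaled to 1 replica")
    }
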
	I1207 20:28:37.253071   30218 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1207 20:28:37.256222   30218 out.go:177] * Verifying Kubernetes components...
	I1207 20:28:37.257713   30218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:28:37.271057   30218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:28:37.271397   30218 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:28:37.271708   30218 node_ready.go:35] waiting up to 6m0s for node "multinode-660958-m02" to be "Ready" ...
	I1207 20:28:37.271779   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:37.271790   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:37.271802   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:37.271811   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:37.274575   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:37.274599   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:37.274609   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:37.274617   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:37.274625   30218 round_trippers.go:580]     Content-Length: 4082
	I1207 20:28:37.274639   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:37 GMT
	I1207 20:28:37.274655   30218 round_trippers.go:580]     Audit-Id: 4dc2012d-101e-4fcc-84af-7d0f81bfe300
	I1207 20:28:37.274680   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:37.274692   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:37.274782   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"467","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1207 20:28:37.275193   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:37.275211   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:37.275218   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:37.275224   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:37.277334   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:37.277349   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:37.277355   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:37.277361   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:37.277368   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:37.277376   30218 round_trippers.go:580]     Content-Length: 4082
	I1207 20:28:37.277387   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:37 GMT
	I1207 20:28:37.277399   30218 round_trippers.go:580]     Audit-Id: 3a3269fe-77c6-42b6-aa2f-a4a6ee19d66f
	I1207 20:28:37.277408   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:37.277494   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"467","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1207 20:28:37.778541   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:37.778572   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:37.778584   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:37.778592   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:37.781114   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:37.781135   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:37.781141   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:37 GMT
	I1207 20:28:37.781146   30218 round_trippers.go:580]     Audit-Id: ac478af8-28e9-43c6-90b9-77cdaccd0ba2
	I1207 20:28:37.781151   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:37.781156   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:37.781161   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:37.781167   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:37.781316   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:38.278557   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:38.278594   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:38.278608   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:38.278636   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:38.281417   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:38.281437   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:38.281444   30218 round_trippers.go:580]     Audit-Id: 2e1ad754-10c8-48a6-9a1e-ea1ef76412d7
	I1207 20:28:38.281450   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:38.281455   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:38.281460   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:38.281465   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:38.281470   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:38 GMT
	I1207 20:28:38.281973   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:38.778728   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:38.778757   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:38.778769   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:38.778780   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:38.781463   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:38.781482   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:38.781492   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:38.781504   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:38.781512   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:38 GMT
	I1207 20:28:38.781524   30218 round_trippers.go:580]     Audit-Id: e1b766f4-7779-400e-a2d1-77faa903c27e
	I1207 20:28:38.781534   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:38.781544   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:38.781666   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:39.278511   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:39.278533   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:39.278541   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:39.278547   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:39.281535   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:39.281562   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:39.281571   30218 round_trippers.go:580]     Audit-Id: fabcb301-afdd-4fd1-a9f6-fc126869c692
	I1207 20:28:39.281580   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:39.281589   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:39.281598   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:39.281606   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:39.281631   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:39 GMT
	I1207 20:28:39.282323   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:39.282636   30218 node_ready.go:58] node "multinode-660958-m02" has status "Ready":"False"
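
The repeated GETs above are the readiness poll: fetch the new node roughly twice a second and check its NodeReady condition, giving up after the 6m0s budget. A minimal client-go sketch of that loop (assumed equivalent to what the log shows; node name and timings copied from the log, kubeconfig path illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the NodeReady condition is True.
    func nodeReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.Background(), "multinode-660958-m02", metav1.GetOptions{})
    		if err == nil && nodeReady(node) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // the log polls roughly twice per second
    	}
    	fmt.Println("timed out waiting for node to become Ready")
    }
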
	I1207 20:28:39.778035   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:39.778069   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:39.778082   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:39.778090   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:39.783715   30218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1207 20:28:39.783741   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:39.783752   30218 round_trippers.go:580]     Audit-Id: ae6c4cba-c814-4a2b-9097-58343bd7b1e6
	I1207 20:28:39.783762   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:39.783769   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:39.783777   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:39.783785   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:39.783798   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:39 GMT
	I1207 20:28:39.784946   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:40.278636   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:40.278663   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:40.278671   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:40.278677   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:40.281203   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:40.281228   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:40.281238   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:40.281245   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:40.281253   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:40 GMT
	I1207 20:28:40.281261   30218 round_trippers.go:580]     Audit-Id: 62ea5ab4-4817-44c2-8d3f-e5e09f323e31
	I1207 20:28:40.281272   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:40.281280   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:40.281427   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:40.778003   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:40.778027   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:40.778035   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:40.778041   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:40.781401   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:28:40.781426   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:40.781436   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:40.781444   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:40.781455   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:40.781464   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:40.781473   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:40 GMT
	I1207 20:28:40.781482   30218 round_trippers.go:580]     Audit-Id: 43524d49-ecc7-4f5b-9e3d-629bc2fe64da
	I1207 20:28:40.781669   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:41.278087   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:41.278110   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:41.278121   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:41.278128   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:41.282626   30218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:28:41.282656   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:41.282666   30218 round_trippers.go:580]     Audit-Id: 3704e799-5550-4ad9-965c-5192e30d45a9
	I1207 20:28:41.282675   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:41.282683   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:41.282691   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:41.282699   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:41.282707   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:41 GMT
	I1207 20:28:41.283749   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:41.284092   30218 node_ready.go:58] node "multinode-660958-m02" has status "Ready":"False"
	I1207 20:28:41.778370   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:41.778393   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:41.778401   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:41.778407   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:41.781218   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:41.781243   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:41.781253   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:41.781262   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:41.781269   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:41 GMT
	I1207 20:28:41.781277   30218 round_trippers.go:580]     Audit-Id: 0efc98dd-59b7-4cb1-8510-9f069992d8d5
	I1207 20:28:41.781285   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:41.781292   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:41.781498   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:42.278034   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:42.278061   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:42.278070   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:42.278076   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:42.281368   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:28:42.281387   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:42.281394   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:42.281399   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:42.281404   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:42 GMT
	I1207 20:28:42.281409   30218 round_trippers.go:580]     Audit-Id: 3ae6ce59-891f-4d52-bd26-d24498d352f4
	I1207 20:28:42.281415   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:42.281420   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:42.281641   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:42.778100   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:42.778122   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:42.778130   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:42.778136   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:42.780218   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:42.780241   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:42.780248   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:42.780254   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:42.780259   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:42 GMT
	I1207 20:28:42.780265   30218 round_trippers.go:580]     Audit-Id: 61e4cc35-062a-4346-a6d8-816ca93ee070
	I1207 20:28:42.780272   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:42.780280   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:42.780435   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:43.278537   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:43.278562   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:43.278570   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:43.278576   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:43.281237   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:43.281254   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:43.281260   30218 round_trippers.go:580]     Audit-Id: 082e6fdc-b167-459d-a89a-f90e3545da60
	I1207 20:28:43.281266   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:43.281277   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:43.281301   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:43.281313   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:43.281321   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:43 GMT
	I1207 20:28:43.281636   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:43.778089   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:43.778119   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:43.778129   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:43.778137   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:43.781128   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:43.781149   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:43.781156   30218 round_trippers.go:580]     Audit-Id: f5318ff6-11f9-4210-bd1e-be8cfd193b9d
	I1207 20:28:43.781162   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:43.781167   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:43.781172   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:43.781177   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:43.781182   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:43 GMT
	I1207 20:28:43.781342   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:43.781692   30218 node_ready.go:58] node "multinode-660958-m02" has status "Ready":"False"
	I1207 20:28:44.278276   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:44.278303   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:44.278316   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:44.278325   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:44.280927   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:44.280952   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:44.280959   30218 round_trippers.go:580]     Audit-Id: 0cf0b9b4-454d-4eaf-a320-9fb9dfe1df7e
	I1207 20:28:44.280964   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:44.280970   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:44.280978   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:44.280987   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:44.280998   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:44 GMT
	I1207 20:28:44.281256   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:44.778952   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:44.778976   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:44.778991   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:44.779000   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:44.781709   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:44.781734   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:44.781743   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:44.781751   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:44 GMT
	I1207 20:28:44.781757   30218 round_trippers.go:580]     Audit-Id: a3efb7f6-5a74-44ae-a54c-0305ff7f4881
	I1207 20:28:44.781764   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:44.781770   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:44.781778   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:44.782123   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:45.278825   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:45.278850   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:45.278857   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:45.278863   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:45.281659   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:45.281676   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:45.281697   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:45.281703   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:45 GMT
	I1207 20:28:45.281708   30218 round_trippers.go:580]     Audit-Id: b7c50db0-b4ad-4c08-83e7-b7c2d341d8ef
	I1207 20:28:45.281713   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:45.281718   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:45.281723   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:45.282176   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:45.778487   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:45.778505   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:45.778514   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:45.778520   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:45.782575   30218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:28:45.782592   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:45.782607   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:45.782613   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:45.782619   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:45.782627   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:45 GMT
	I1207 20:28:45.782641   30218 round_trippers.go:580]     Audit-Id: 326bd90a-56e4-4e1e-96d0-7478cbe98912
	I1207 20:28:45.782653   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:45.782976   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:45.783308   30218 node_ready.go:58] node "multinode-660958-m02" has status "Ready":"False"
	I1207 20:28:46.278692   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:46.278712   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:46.278721   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:46.278727   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:46.282092   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:28:46.282113   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:46.282120   30218 round_trippers.go:580]     Audit-Id: fd1d18f4-911c-49be-bebe-04b0a8f2b074
	I1207 20:28:46.282128   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:46.282136   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:46.282144   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:46.282153   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:46.282163   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:46 GMT
	I1207 20:28:46.282373   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"469","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1207 20:28:46.778027   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:46.778062   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:46.778074   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:46.778083   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:46.781287   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:28:46.781306   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:46.781315   30218 round_trippers.go:580]     Audit-Id: 40c21f61-f18f-4862-a6ef-8aebba573856
	I1207 20:28:46.781322   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:46.781329   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:46.781336   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:46.781342   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:46.781348   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:46 GMT
	I1207 20:28:46.781688   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"486","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3436 chars]
	I1207 20:28:47.278056   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:47.278094   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:47.278102   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:47.278108   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:47.280848   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:47.280873   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:47.280883   30218 round_trippers.go:580]     Audit-Id: 4664dd44-2d9d-4b45-aaf0-4a0b7e822496
	I1207 20:28:47.280891   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:47.280899   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:47.280908   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:47.280917   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:47.280926   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:47 GMT
	I1207 20:28:47.281110   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"486","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3436 chars]
	I1207 20:28:47.778834   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:47.778865   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:47.778877   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:47.778887   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:47.782053   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:28:47.782088   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:47.782099   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:47.782107   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:47.782116   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:47 GMT
	I1207 20:28:47.782124   30218 round_trippers.go:580]     Audit-Id: 59f07a7e-fef5-4936-b930-072d6e96b4bd
	I1207 20:28:47.782133   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:47.782140   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:47.782282   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"486","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3436 chars]
	I1207 20:28:48.278647   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:48.278675   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:48.278682   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:48.278688   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:48.281673   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:48.281691   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:48.281698   30218 round_trippers.go:580]     Audit-Id: 6586bb4c-8006-4b96-acde-ad018ef89559
	I1207 20:28:48.281703   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:48.281708   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:48.281713   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:48.281718   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:48.281723   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:48 GMT
	I1207 20:28:48.281989   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"486","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3436 chars]
	I1207 20:28:48.282295   30218 node_ready.go:58] node "multinode-660958-m02" has status "Ready":"False"
	I1207 20:28:48.778737   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:48.778764   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:48.778777   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:48.778787   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:48.781645   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:48.781667   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:48.781674   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:48.781686   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:48.781705   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:48 GMT
	I1207 20:28:48.781719   30218 round_trippers.go:580]     Audit-Id: 1ce23d3e-319a-4850-9e0b-3eed43556fce
	I1207 20:28:48.781728   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:48.781736   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:48.782183   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"486","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3436 chars]
	I1207 20:28:49.278177   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:49.278199   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.278225   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.278231   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.280973   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:49.280997   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.281004   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.281009   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.281014   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.281019   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.281024   30218 round_trippers.go:580]     Audit-Id: 4dd43de9-3667-4086-9dd2-09941b822d48
	I1207 20:28:49.281029   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.281750   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"498","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I1207 20:28:49.282163   30218 node_ready.go:49] node "multinode-660958-m02" has status "Ready":"True"
	I1207 20:28:49.282181   30218 node_ready.go:38] duration metric: took 12.010457395s waiting for node "multinode-660958-m02" to be "Ready" ...
	I1207 20:28:49.282191   30218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
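
The node_ready.go entries above are minikube repeatedly GETting /api/v1/nodes/multinode-660958-m02 roughly twice a second (the .278/.778 timestamps) and inspecting the node's Ready condition until it reports True, after which the pod_ready.go phase starts. A minimal client-go sketch of that polling loop follows; it is illustrative only, and the kubeconfig path, poll interval and timeout are assumptions rather than values taken from this log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node's Ready condition is True,
// which is what the `has status "Ready":"True"` lines above reflect.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; minikube writes its own per-profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll about every 500ms, matching the cadence visible in the log;
	// the 6-minute deadline here is an assumed value for illustration.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-660958-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
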
	I1207 20:28:49.282295   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:28:49.282303   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.282313   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.282323   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.286377   30218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:28:49.286405   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.286416   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.286428   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.286438   30218 round_trippers.go:580]     Audit-Id: d7e33031-13e3-48a5-97fd-35c3a9794e4b
	I1207 20:28:49.286448   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.286468   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.286481   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.288232   30218 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"498"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"403","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67324 chars]
	I1207 20:28:49.290232   30218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:49.290303   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:28:49.290317   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.290324   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.290330   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.292589   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:49.292608   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.292617   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.292625   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.292637   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.292646   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.292657   30218 round_trippers.go:580]     Audit-Id: 5df2c2db-5508-4e0f-9705-4d69a04fe587
	I1207 20:28:49.292668   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.292969   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"403","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1207 20:28:49.293324   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:28:49.293335   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.293342   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.293348   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.295570   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:49.295590   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.295599   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.295609   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.295618   30218 round_trippers.go:580]     Audit-Id: b17310d1-6dd4-44d6-9e4c-1c15e31e9499
	I1207 20:28:49.295626   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.295637   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.295646   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.295831   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:28:49.296135   30218 pod_ready.go:92] pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace has status "Ready":"True"
	I1207 20:28:49.296149   30218 pod_ready.go:81] duration metric: took 5.899195ms waiting for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
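
The pod_ready.go entries in this block apply the same pattern to each system-critical pod: fetch the pod object, then confirm its Ready condition is True. A short sketch of that condition check, assuming a *corev1.Pod already obtained with client-go as in the node example above:

package readiness

import corev1 "k8s.io/api/core/v1"

// podReady reports whether a pod's Ready condition is True, which is what
// the `pod ... has status "Ready":"True"` lines in this log correspond to.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
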
	I1207 20:28:49.296156   30218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:49.296208   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-660958
	I1207 20:28:49.296219   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.296229   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.296239   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.298270   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:49.298289   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.298299   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.298308   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.298315   30218 round_trippers.go:580]     Audit-Id: d1b5c7c8-7513-4dbd-b5b8-757be55c96c1
	I1207 20:28:49.298320   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.298328   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.298333   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.298440   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-660958","namespace":"kube-system","uid":"997363d1-ef51-46b9-98ad-276aa803f3a8","resourceVersion":"356","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.19:2379","kubernetes.io/config.hash":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.mirror":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.seen":"2023-12-07T20:27:35.772724909Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1207 20:28:49.298871   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:28:49.298889   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.298900   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.298909   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.300795   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:28:49.300809   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.300815   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.300820   30218 round_trippers.go:580]     Audit-Id: 06f6b74a-3d7a-4235-8f93-329c5ee6015a
	I1207 20:28:49.300825   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.300831   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.300839   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.300848   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.301104   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:28:49.301352   30218 pod_ready.go:92] pod "etcd-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:28:49.301364   30218 pod_ready.go:81] duration metric: took 5.201605ms waiting for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:49.301376   30218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:49.301411   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-660958
	I1207 20:28:49.301418   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.301424   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.301430   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.303358   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:28:49.303378   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.303387   30218 round_trippers.go:580]     Audit-Id: 885bc992-41a4-496b-bb78-8b6e2a02c523
	I1207 20:28:49.303396   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.303405   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.303413   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.303423   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.303431   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.303647   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-660958","namespace":"kube-system","uid":"ab5b9260-db2a-4625-aff0-8b0fcf6a74a8","resourceVersion":"280","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.19:8443","kubernetes.io/config.hash":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.mirror":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.seen":"2023-12-07T20:27:35.772728261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1207 20:28:49.304099   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:28:49.304121   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.304131   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.304141   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.305878   30218 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:28:49.305896   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.305905   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.305910   30218 round_trippers.go:580]     Audit-Id: d6ed46c7-d2b4-4575-b8d0-f844e69386d4
	I1207 20:28:49.305915   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.305934   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.305942   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.305950   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.306227   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:28:49.306479   30218 pod_ready.go:92] pod "kube-apiserver-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:28:49.306493   30218 pod_ready.go:81] duration metric: took 5.110745ms waiting for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:49.306505   30218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:49.306558   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-660958
	I1207 20:28:49.306568   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.306578   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.306588   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.309531   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:49.309543   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.309548   30218 round_trippers.go:580]     Audit-Id: db105bff-ef46-4f9e-935e-ffe7d0f11386
	I1207 20:28:49.309553   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.309558   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.309563   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.309569   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.309588   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.309877   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-660958","namespace":"kube-system","uid":"fb58a1b4-61c1-41c6-b3af-824cc7a08c14","resourceVersion":"359","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.mirror":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.seen":"2023-12-07T20:27:35.772729377Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1207 20:28:49.310243   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:28:49.310256   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.310262   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.310269   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.313285   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:28:49.313302   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.313310   30218 round_trippers.go:580]     Audit-Id: 7313ebd9-b421-45e5-bedc-cc19fc8d4869
	I1207 20:28:49.313318   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.313327   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.313337   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.313353   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.313362   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.313481   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:28:49.313738   30218 pod_ready.go:92] pod "kube-controller-manager-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:28:49.313754   30218 pod_ready.go:81] duration metric: took 7.239924ms waiting for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:49.313766   30218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:49.479089   30218 request.go:629] Waited for 165.269344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:28:49.479160   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:28:49.479168   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.479178   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.479187   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.481754   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:49.481774   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.481784   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.481792   30218 round_trippers.go:580]     Audit-Id: 598a17fe-2325-4079-b45e-978cb3a55b04
	I1207 20:28:49.481799   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.481807   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.481815   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.481827   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.482134   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pfc45","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e39fc15-3b2e-418c-92f1-32570e3bd853","resourceVersion":"373","creationTimestamp":"2023-12-07T20:27:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
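
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local rate limiter (defaults of QPS 5 and burst 10), not from the API server's priority-and-fairness machinery that the X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid response headers report on. A hedged sketch of how a client could raise those client-side limits; the kubeconfig path and chosen values are illustrative assumptions, not what minikube configures:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}

	// Raise the client-side limits so bursts of GETs are not queued locally.
	// The values below are illustrative; client-go's defaults are QPS=5, Burst=10.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs
	fmt.Printf("clientset created with QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}
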
	I1207 20:28:49.678905   30218 request.go:629] Waited for 196.379876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:28:49.678987   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:28:49.678999   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.679010   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.679024   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.681698   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:49.681718   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.681724   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.681730   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.681735   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.681740   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.681745   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.681750   30218 round_trippers.go:580]     Audit-Id: a52dd144-b95f-4dcf-81f1-69c9c3ea3094
	I1207 20:28:49.682058   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:28:49.682347   30218 pod_ready.go:92] pod "kube-proxy-pfc45" in "kube-system" namespace has status "Ready":"True"
	I1207 20:28:49.682359   30218 pod_ready.go:81] duration metric: took 368.586564ms waiting for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:49.682368   30218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rxqfp" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:49.878760   30218 request.go:629] Waited for 196.335904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxqfp
	I1207 20:28:49.878822   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxqfp
	I1207 20:28:49.878828   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:49.878835   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:49.878842   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:49.881266   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:49.881283   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:49.881290   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:49 GMT
	I1207 20:28:49.881295   30218 round_trippers.go:580]     Audit-Id: 8ed722bd-d0c0-4d42-9099-df5869617fa9
	I1207 20:28:49.881300   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:49.881306   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:49.881311   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:49.881316   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:49.881620   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rxqfp","generateName":"kube-proxy-","namespace":"kube-system","uid":"c06f17e2-4050-4554-8c4a-057bca0bb5ff","resourceVersion":"481","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1207 20:28:50.078309   30218 request.go:629] Waited for 196.286087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:50.078374   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:28:50.078378   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:50.078388   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:50.078394   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:50.080722   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:50.080742   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:50.080749   30218 round_trippers.go:580]     Audit-Id: aff90b58-b327-45e2-851d-632c11f8563a
	I1207 20:28:50.080755   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:50.080760   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:50.080766   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:50.080771   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:50.080777   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:50 GMT
	I1207 20:28:50.081070   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"498","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_28_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I1207 20:28:50.081353   30218 pod_ready.go:92] pod "kube-proxy-rxqfp" in "kube-system" namespace has status "Ready":"True"
	I1207 20:28:50.081374   30218 pod_ready.go:81] duration metric: took 398.996836ms waiting for pod "kube-proxy-rxqfp" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:50.081387   30218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:50.278820   30218 request.go:629] Waited for 197.376794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:28:50.278889   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:28:50.278895   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:50.278917   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:50.278926   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:50.281641   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:50.281665   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:50.281674   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:50.281682   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:50.281689   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:50 GMT
	I1207 20:28:50.281697   30218 round_trippers.go:580]     Audit-Id: e1adbdfb-d81d-47c3-bd56-3c81ffcc7c16
	I1207 20:28:50.281704   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:50.281711   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:50.281975   30218 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-660958","namespace":"kube-system","uid":"ff5eb685-6086-4a98-b3b9-a485746dcbd4","resourceVersion":"279","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.mirror":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.seen":"2023-12-07T20:27:35.772730586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1207 20:28:50.478683   30218 request.go:629] Waited for 196.341434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:28:50.478742   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:28:50.478747   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:50.478754   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:50.478761   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:50.481724   30218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:28:50.481750   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:50.481759   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:50.481767   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:50 GMT
	I1207 20:28:50.481775   30218 round_trippers.go:580]     Audit-Id: 30539484-d14b-4615-b055-644ee9caccd2
	I1207 20:28:50.481783   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:50.481796   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:50.481807   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:50.482190   30218 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1207 20:28:50.482501   30218 pod_ready.go:92] pod "kube-scheduler-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:28:50.482516   30218 pod_ready.go:81] duration metric: took 401.121432ms waiting for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:28:50.482529   30218 pod_ready.go:38] duration metric: took 1.200327091s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:28:50.482546   30218 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 20:28:50.482598   30218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:28:50.496156   30218 system_svc.go:56] duration metric: took 13.603951ms WaitForService to wait for kubelet.
	I1207 20:28:50.496181   30218 kubeadm.go:581] duration metric: took 13.243078207s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 20:28:50.496202   30218 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:28:50.678653   30218 request.go:629] Waited for 182.385034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1207 20:28:50.678717   30218 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1207 20:28:50.678724   30218 round_trippers.go:469] Request Headers:
	I1207 20:28:50.678732   30218 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:28:50.678740   30218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:28:50.682658   30218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:28:50.682676   30218 round_trippers.go:577] Response Headers:
	I1207 20:28:50.682683   30218 round_trippers.go:580]     Audit-Id: 1820818d-eb42-426f-8315-1fc93df852f0
	I1207 20:28:50.682688   30218 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:28:50.682694   30218 round_trippers.go:580]     Content-Type: application/json
	I1207 20:28:50.682699   30218 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:28:50.682704   30218 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:28:50.682709   30218 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:28:50 GMT
	I1207 20:28:50.683318   30218 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"498"},"items":[{"metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"383","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10196 chars]
	I1207 20:28:50.683747   30218 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:28:50.683765   30218 node_conditions.go:123] node cpu capacity is 2
	I1207 20:28:50.683774   30218 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:28:50.683778   30218 node_conditions.go:123] node cpu capacity is 2
	I1207 20:28:50.683785   30218 node_conditions.go:105] duration metric: took 187.576607ms to run NodePressure ...
	I1207 20:28:50.683804   30218 start.go:228] waiting for startup goroutines ...
	I1207 20:28:50.683828   30218 start.go:242] writing updated cluster config ...
	I1207 20:28:50.684118   30218 ssh_runner.go:195] Run: rm -f paused
	I1207 20:28:50.729113   30218 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 20:28:50.731964   30218 out.go:177] * Done! kubectl is now configured to use "multinode-660958" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 20:27:01 UTC, ends at Thu 2023-12-07 20:28:58 UTC. --
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.543837483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701980938543740438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=79a5927f-2666-4cc6-a204-fc77efc7e480 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.544378988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=37fa1bb2-bd8d-4b30-84c1-11d02576b2a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.544423395Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=37fa1bb2-bd8d-4b30-84c1-11d02576b2a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.544635562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:551b2ba8f1f407fb9abe340034d353597582f5c40f97f8a29065d1f95ab2f89c,PodSandboxId:6b4e1b0ac01cb65d3b5b55d3502950202a57a8348a8a8ba88569d5c426c5cbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701980934784254685,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-jbm9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38ee0c6-472e-4db5-bb15-c1f1ce390207,},Annotations:map[string]string{io.kubernetes.container.hash: 461edd0,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e416091b66bbe5405c791ef9b451144ede92c198b4e5d86d89b20655b57cb9c1,PodSandboxId:de5592c5eaa8d26c847b3f74ceba7f6f9f0f52530cb4d71ffae133697b1ecedc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701980875392939601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7mss7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6632ea-9aae-43e7-8b17-56399870082b,},Annotations:map[string]string{io.kubernetes.container.hash: e555cfc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8924a9e0967b28e7b9e92f590c625750fd69d73f95c9a0f6d28a8cfec570d6,PodSandboxId:88c112960d564623d39c88d820cd6b9e12926d302084f42b7b37d8e10daf38f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701980874262093318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244d4fc421e16a69243fec7bda9c69bb263d905c2db381e166d3359ad695c076,PodSandboxId:0f5879dda3c6d8371810013f767521a095299f116af060e2d4e2565174f79e89,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701980871903535313,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpfqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 158552a2-294c-4d08-81de-05b1daf7dfe1,},Annotations:map[string]string{io.kubernetes.container.hash: 2587529f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a1e1b036c9b58f4b650a99aef8983904d507d294c3822e259a2f4988fd2fb7,PodSandboxId:bd2d9501280c5d95e4d0a904dea50e0cf01802f67fae8156125747b41157e748,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701980869421828077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfc45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e39fc15-3b2e-418c-92f1-32570e3
bd853,},Annotations:map[string]string{io.kubernetes.container.hash: c931b25d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af8d316d47c88d7512084e8db79216bd0a8d2a8ddb0320fed17b157f15e0e2ca,PodSandboxId:2318a5f957a8239e42b72698209f0e08167bbd6a95f5a6df14d3fbab77106f49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701980848720746071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7abfcd2f221a7da3eb913c0d8d4a01,},Annotations:map[string]string{io.kubernetes.
container.hash: 9af16bb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f417b72d47e3920ba768af2a4a88b0bd46195b7d7faeeb29ffd9e0a29391c09,PodSandboxId:4ad1d47bd00b5d5afb2257b8f8cdf77dab408e2c7228247179f41b2c36cbe795,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701980848576918188,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 252eef32247c5aa4e495d2fdf0fe1947,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd178fae9e64a577638906c2b72ccd44ff6be3203aed9b436b91ed1840d2a095,PodSandboxId:26f7a9e11005aee0ef9bf69f0149a851956b1fb72c079c3f2780d118210e44ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701980848350895500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36460e92ca68c41cc5386b5bee9ca633,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6feb8b3d9d8e69b81f6eb7f6c5ad15c287d21f7bc6ea1ed35fc5a363d7cd203c,PodSandboxId:d39dcfd4fe2407d044b0794761d62c08f341a2c16f19eb32d5627040fd210a4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701980848175324352,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be2f0b39689e91f9171b575c679c7c3,},Annotations:map[string]string{io.kubernetes.
container.hash: 251fd5a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=37fa1bb2-bd8d-4b30-84c1-11d02576b2a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.596691404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4a35fb05-2f16-4cf2-a93b-517fcad1e34f name=/runtime.v1.RuntimeService/Version
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.596813877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4a35fb05-2f16-4cf2-a93b-517fcad1e34f name=/runtime.v1.RuntimeService/Version
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.598461471Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a44f517f-0893-4e38-b72b-811611d2daf4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.598921405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701980938598907549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a44f517f-0893-4e38-b72b-811611d2daf4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.599421750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0b507f50-1ad4-4e1b-87b4-53499663be01 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.599475143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0b507f50-1ad4-4e1b-87b4-53499663be01 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.599712304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:551b2ba8f1f407fb9abe340034d353597582f5c40f97f8a29065d1f95ab2f89c,PodSandboxId:6b4e1b0ac01cb65d3b5b55d3502950202a57a8348a8a8ba88569d5c426c5cbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701980934784254685,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-jbm9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38ee0c6-472e-4db5-bb15-c1f1ce390207,},Annotations:map[string]string{io.kubernetes.container.hash: 461edd0,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e416091b66bbe5405c791ef9b451144ede92c198b4e5d86d89b20655b57cb9c1,PodSandboxId:de5592c5eaa8d26c847b3f74ceba7f6f9f0f52530cb4d71ffae133697b1ecedc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701980875392939601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7mss7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6632ea-9aae-43e7-8b17-56399870082b,},Annotations:map[string]string{io.kubernetes.container.hash: e555cfc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8924a9e0967b28e7b9e92f590c625750fd69d73f95c9a0f6d28a8cfec570d6,PodSandboxId:88c112960d564623d39c88d820cd6b9e12926d302084f42b7b37d8e10daf38f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701980874262093318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244d4fc421e16a69243fec7bda9c69bb263d905c2db381e166d3359ad695c076,PodSandboxId:0f5879dda3c6d8371810013f767521a095299f116af060e2d4e2565174f79e89,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701980871903535313,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpfqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 158552a2-294c-4d08-81de-05b1daf7dfe1,},Annotations:map[string]string{io.kubernetes.container.hash: 2587529f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a1e1b036c9b58f4b650a99aef8983904d507d294c3822e259a2f4988fd2fb7,PodSandboxId:bd2d9501280c5d95e4d0a904dea50e0cf01802f67fae8156125747b41157e748,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701980869421828077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfc45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e39fc15-3b2e-418c-92f1-32570e3
bd853,},Annotations:map[string]string{io.kubernetes.container.hash: c931b25d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af8d316d47c88d7512084e8db79216bd0a8d2a8ddb0320fed17b157f15e0e2ca,PodSandboxId:2318a5f957a8239e42b72698209f0e08167bbd6a95f5a6df14d3fbab77106f49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701980848720746071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7abfcd2f221a7da3eb913c0d8d4a01,},Annotations:map[string]string{io.kubernetes.
container.hash: 9af16bb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f417b72d47e3920ba768af2a4a88b0bd46195b7d7faeeb29ffd9e0a29391c09,PodSandboxId:4ad1d47bd00b5d5afb2257b8f8cdf77dab408e2c7228247179f41b2c36cbe795,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701980848576918188,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 252eef32247c5aa4e495d2fdf0fe1947,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd178fae9e64a577638906c2b72ccd44ff6be3203aed9b436b91ed1840d2a095,PodSandboxId:26f7a9e11005aee0ef9bf69f0149a851956b1fb72c079c3f2780d118210e44ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701980848350895500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36460e92ca68c41cc5386b5bee9ca633,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6feb8b3d9d8e69b81f6eb7f6c5ad15c287d21f7bc6ea1ed35fc5a363d7cd203c,PodSandboxId:d39dcfd4fe2407d044b0794761d62c08f341a2c16f19eb32d5627040fd210a4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701980848175324352,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be2f0b39689e91f9171b575c679c7c3,},Annotations:map[string]string{io.kubernetes.
container.hash: 251fd5a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0b507f50-1ad4-4e1b-87b4-53499663be01 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.641370693Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7dddc2eb-0e34-48dd-9525-f0ba518f4e4b name=/runtime.v1.RuntimeService/Version
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.641427908Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7dddc2eb-0e34-48dd-9525-f0ba518f4e4b name=/runtime.v1.RuntimeService/Version
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.642869906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5c832a2a-a170-4c01-8f28-1be523eba5a0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.643392302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701980938643376396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5c832a2a-a170-4c01-8f28-1be523eba5a0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.644197690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bf739e54-17d6-463c-8641-4d2316880966 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.644244519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bf739e54-17d6-463c-8641-4d2316880966 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.644442686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:551b2ba8f1f407fb9abe340034d353597582f5c40f97f8a29065d1f95ab2f89c,PodSandboxId:6b4e1b0ac01cb65d3b5b55d3502950202a57a8348a8a8ba88569d5c426c5cbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701980934784254685,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-jbm9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38ee0c6-472e-4db5-bb15-c1f1ce390207,},Annotations:map[string]string{io.kubernetes.container.hash: 461edd0,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e416091b66bbe5405c791ef9b451144ede92c198b4e5d86d89b20655b57cb9c1,PodSandboxId:de5592c5eaa8d26c847b3f74ceba7f6f9f0f52530cb4d71ffae133697b1ecedc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701980875392939601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7mss7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6632ea-9aae-43e7-8b17-56399870082b,},Annotations:map[string]string{io.kubernetes.container.hash: e555cfc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8924a9e0967b28e7b9e92f590c625750fd69d73f95c9a0f6d28a8cfec570d6,PodSandboxId:88c112960d564623d39c88d820cd6b9e12926d302084f42b7b37d8e10daf38f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701980874262093318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244d4fc421e16a69243fec7bda9c69bb263d905c2db381e166d3359ad695c076,PodSandboxId:0f5879dda3c6d8371810013f767521a095299f116af060e2d4e2565174f79e89,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701980871903535313,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpfqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 158552a2-294c-4d08-81de-05b1daf7dfe1,},Annotations:map[string]string{io.kubernetes.container.hash: 2587529f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a1e1b036c9b58f4b650a99aef8983904d507d294c3822e259a2f4988fd2fb7,PodSandboxId:bd2d9501280c5d95e4d0a904dea50e0cf01802f67fae8156125747b41157e748,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701980869421828077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfc45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e39fc15-3b2e-418c-92f1-32570e3
bd853,},Annotations:map[string]string{io.kubernetes.container.hash: c931b25d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af8d316d47c88d7512084e8db79216bd0a8d2a8ddb0320fed17b157f15e0e2ca,PodSandboxId:2318a5f957a8239e42b72698209f0e08167bbd6a95f5a6df14d3fbab77106f49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701980848720746071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7abfcd2f221a7da3eb913c0d8d4a01,},Annotations:map[string]string{io.kubernetes.
container.hash: 9af16bb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f417b72d47e3920ba768af2a4a88b0bd46195b7d7faeeb29ffd9e0a29391c09,PodSandboxId:4ad1d47bd00b5d5afb2257b8f8cdf77dab408e2c7228247179f41b2c36cbe795,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701980848576918188,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 252eef32247c5aa4e495d2fdf0fe1947,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd178fae9e64a577638906c2b72ccd44ff6be3203aed9b436b91ed1840d2a095,PodSandboxId:26f7a9e11005aee0ef9bf69f0149a851956b1fb72c079c3f2780d118210e44ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701980848350895500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36460e92ca68c41cc5386b5bee9ca633,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6feb8b3d9d8e69b81f6eb7f6c5ad15c287d21f7bc6ea1ed35fc5a363d7cd203c,PodSandboxId:d39dcfd4fe2407d044b0794761d62c08f341a2c16f19eb32d5627040fd210a4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701980848175324352,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be2f0b39689e91f9171b575c679c7c3,},Annotations:map[string]string{io.kubernetes.
container.hash: 251fd5a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bf739e54-17d6-463c-8641-4d2316880966 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.693535244Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f85fc23f-9747-43b7-9847-5d2af8f296b0 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.693676508Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f85fc23f-9747-43b7-9847-5d2af8f296b0 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.695548686Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e07cdb19-a26e-4a2a-b64a-c996e2df670b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.696288424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701980938696261527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e07cdb19-a26e-4a2a-b64a-c996e2df670b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.697132212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=05da63f4-61d0-4d93-b420-59a2901509a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.697210492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=05da63f4-61d0-4d93-b420-59a2901509a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:28:58 multinode-660958 crio[714]: time="2023-12-07 20:28:58.697542457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:551b2ba8f1f407fb9abe340034d353597582f5c40f97f8a29065d1f95ab2f89c,PodSandboxId:6b4e1b0ac01cb65d3b5b55d3502950202a57a8348a8a8ba88569d5c426c5cbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701980934784254685,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-jbm9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38ee0c6-472e-4db5-bb15-c1f1ce390207,},Annotations:map[string]string{io.kubernetes.container.hash: 461edd0,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e416091b66bbe5405c791ef9b451144ede92c198b4e5d86d89b20655b57cb9c1,PodSandboxId:de5592c5eaa8d26c847b3f74ceba7f6f9f0f52530cb4d71ffae133697b1ecedc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701980875392939601,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7mss7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6632ea-9aae-43e7-8b17-56399870082b,},Annotations:map[string]string{io.kubernetes.container.hash: e555cfc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8924a9e0967b28e7b9e92f590c625750fd69d73f95c9a0f6d28a8cfec570d6,PodSandboxId:88c112960d564623d39c88d820cd6b9e12926d302084f42b7b37d8e10daf38f1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701980874262093318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244d4fc421e16a69243fec7bda9c69bb263d905c2db381e166d3359ad695c076,PodSandboxId:0f5879dda3c6d8371810013f767521a095299f116af060e2d4e2565174f79e89,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701980871903535313,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpfqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 158552a2-294c-4d08-81de-05b1daf7dfe1,},Annotations:map[string]string{io.kubernetes.container.hash: 2587529f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a1e1b036c9b58f4b650a99aef8983904d507d294c3822e259a2f4988fd2fb7,PodSandboxId:bd2d9501280c5d95e4d0a904dea50e0cf01802f67fae8156125747b41157e748,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701980869421828077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfc45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e39fc15-3b2e-418c-92f1-32570e3
bd853,},Annotations:map[string]string{io.kubernetes.container.hash: c931b25d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af8d316d47c88d7512084e8db79216bd0a8d2a8ddb0320fed17b157f15e0e2ca,PodSandboxId:2318a5f957a8239e42b72698209f0e08167bbd6a95f5a6df14d3fbab77106f49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701980848720746071,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7abfcd2f221a7da3eb913c0d8d4a01,},Annotations:map[string]string{io.kubernetes.
container.hash: 9af16bb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f417b72d47e3920ba768af2a4a88b0bd46195b7d7faeeb29ffd9e0a29391c09,PodSandboxId:4ad1d47bd00b5d5afb2257b8f8cdf77dab408e2c7228247179f41b2c36cbe795,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701980848576918188,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 252eef32247c5aa4e495d2fdf0fe1947,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd178fae9e64a577638906c2b72ccd44ff6be3203aed9b436b91ed1840d2a095,PodSandboxId:26f7a9e11005aee0ef9bf69f0149a851956b1fb72c079c3f2780d118210e44ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701980848350895500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36460e92ca68c41cc5386b5bee9ca633,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6feb8b3d9d8e69b81f6eb7f6c5ad15c287d21f7bc6ea1ed35fc5a363d7cd203c,PodSandboxId:d39dcfd4fe2407d044b0794761d62c08f341a2c16f19eb32d5627040fd210a4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701980848175324352,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be2f0b39689e91f9171b575c679c7c3,},Annotations:map[string]string{io.kubernetes.
container.hash: 251fd5a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=05da63f4-61d0-4d93-b420-59a2901509a6 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	551b2ba8f1f40       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   6b4e1b0ac01cb       busybox-5bc68d56bd-jbm9q
	e416091b66bbe       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   0                   de5592c5eaa8d       coredns-5dd5756b68-7mss7
	fb8924a9e0967       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   88c112960d564       storage-provisioner
	244d4fc421e16       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   0f5879dda3c6d       kindnet-jpfqs
	a1a1e1b036c9b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   bd2d9501280c5       kube-proxy-pfc45
	af8d316d47c88       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   2318a5f957a82       etcd-multinode-660958
	9f417b72d47e3       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   4ad1d47bd00b5       kube-controller-manager-multinode-660958
	cd178fae9e64a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   26f7a9e11005a       kube-scheduler-multinode-660958
	6feb8b3d9d8e6       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   d39dcfd4fe240       kube-apiserver-multinode-660958
	
	* 
	* ==> coredns [e416091b66bbe5405c791ef9b451144ede92c198b4e5d86d89b20655b57cb9c1] <==
	* [INFO] 10.244.0.3:46598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092619s
	[INFO] 10.244.1.2:60227 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116227s
	[INFO] 10.244.1.2:51961 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00173718s
	[INFO] 10.244.1.2:46071 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149692s
	[INFO] 10.244.1.2:50222 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095501s
	[INFO] 10.244.1.2:56618 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001158508s
	[INFO] 10.244.1.2:52721 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084735s
	[INFO] 10.244.1.2:32786 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073019s
	[INFO] 10.244.1.2:33207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070608s
	[INFO] 10.244.0.3:53344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119256s
	[INFO] 10.244.0.3:50896 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078995s
	[INFO] 10.244.0.3:43954 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085874s
	[INFO] 10.244.0.3:59507 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061387s
	[INFO] 10.244.1.2:50041 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252798s
	[INFO] 10.244.1.2:34478 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115092s
	[INFO] 10.244.1.2:40763 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000212241s
	[INFO] 10.244.1.2:45986 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108419s
	[INFO] 10.244.0.3:60563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183386s
	[INFO] 10.244.0.3:39706 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120899s
	[INFO] 10.244.0.3:58869 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078404s
	[INFO] 10.244.0.3:56336 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000208713s
	[INFO] 10.244.1.2:47809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000341362s
	[INFO] 10.244.1.2:48199 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108452s
	[INFO] 10.244.1.2:32926 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114166s
	[INFO] 10.244.1.2:46909 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000163806s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-660958
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-660958
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=multinode-660958
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T20_27_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:27:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-660958
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:28:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:27:53 +0000   Thu, 07 Dec 2023 20:27:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:27:53 +0000   Thu, 07 Dec 2023 20:27:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:27:53 +0000   Thu, 07 Dec 2023 20:27:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:27:53 +0000   Thu, 07 Dec 2023 20:27:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    multinode-660958
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 27972dc3ec4347f3b362f8548c92a179
	  System UUID:                27972dc3-ec43-47f3-b362-f8548c92a179
	  Boot ID:                    12054c2a-fb8e-487d-9282-743990f380bf
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-jbm9q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-7mss7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     70s
	  kube-system                 etcd-multinode-660958                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         82s
	  kube-system                 kindnet-jpfqs                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      71s
	  kube-system                 kube-apiserver-multinode-660958             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-multinode-660958    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-pfc45                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-multinode-660958             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 69s   kube-proxy       
	  Normal  Starting                 83s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s   kubelet          Node multinode-660958 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s   kubelet          Node multinode-660958 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s   kubelet          Node multinode-660958 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           71s   node-controller  Node multinode-660958 event: Registered Node multinode-660958 in Controller
	  Normal  NodeReady                65s   kubelet          Node multinode-660958 status is now: NodeReady
	
	
	Name:               multinode-660958-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-660958-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=multinode-660958
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_07T20_28_36_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:28:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-660958-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:28:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:28:49 +0000   Thu, 07 Dec 2023 20:28:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:28:49 +0000   Thu, 07 Dec 2023 20:28:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:28:49 +0000   Thu, 07 Dec 2023 20:28:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:28:49 +0000   Thu, 07 Dec 2023 20:28:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    multinode-660958-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3106d12287b94e3b946de42da9b1f4d9
	  System UUID:                3106d122-87b9-4e3b-946d-e42da9b1f4d9
	  Boot ID:                    33e5ed41-b61b-4407-8aeb-48da19301e90
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-vllfc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-d764j               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23s
	  kube-system                 kube-proxy-rxqfp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientMemory  23s (x5 over 25s)  kubelet          Node multinode-660958-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x5 over 25s)  kubelet          Node multinode-660958-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x5 over 25s)  kubelet          Node multinode-660958-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s                node-controller  Node multinode-660958-m02 event: Registered Node multinode-660958-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-660958-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Dec 7 20:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068626] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.379082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec 7 20:27] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151083] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.023296] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.996711] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.103301] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.151766] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.107294] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.213813] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[  +9.378176] systemd-fstab-generator[922]: Ignoring "noauto" for root device
	[  +9.276916] systemd-fstab-generator[1255]: Ignoring "noauto" for root device
	[ +20.614054] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [af8d316d47c88d7512084e8db79216bd0a8d2a8ddb0320fed17b157f15e0e2ca] <==
	* {"level":"info","ts":"2023-12-07T20:27:30.524211Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T20:27:30.524453Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T20:27:30.524487Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T20:27:30.523588Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"683e1d26ac7e3123","initial-advertise-peer-urls":["https://192.168.39.19:2380"],"listen-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-07T20:27:30.523652Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-07T20:27:30.52496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 switched to configuration voters=(7511473280440480035)"}
	{"level":"info","ts":"2023-12-07T20:27:30.52508Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123","added-peer-id":"683e1d26ac7e3123","added-peer-peer-urls":["https://192.168.39.19:2380"]}
	{"level":"info","ts":"2023-12-07T20:27:30.880882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-07T20:27:30.880987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-07T20:27:30.881022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgPreVoteResp from 683e1d26ac7e3123 at term 1"}
	{"level":"info","ts":"2023-12-07T20:27:30.881052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became candidate at term 2"}
	{"level":"info","ts":"2023-12-07T20:27:30.881076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgVoteResp from 683e1d26ac7e3123 at term 2"}
	{"level":"info","ts":"2023-12-07T20:27:30.881103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became leader at term 2"}
	{"level":"info","ts":"2023-12-07T20:27:30.881128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 683e1d26ac7e3123 elected leader 683e1d26ac7e3123 at term 2"}
	{"level":"info","ts":"2023-12-07T20:27:30.883011Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:27:30.883883Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"683e1d26ac7e3123","local-member-attributes":"{Name:multinode-660958 ClientURLs:[https://192.168.39.19:2379]}","request-path":"/0/members/683e1d26ac7e3123/attributes","cluster-id":"3f32d84448c0bab8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T20:27:30.883932Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:27:30.884449Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T20:27:30.884566Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T20:27:30.884684Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:27:30.884825Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:27:30.88496Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:27:30.884986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T20:27:30.88501Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:27:30.886287Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.19:2379"}
	
	* 
	* ==> kernel <==
	*  20:28:59 up 2 min,  0 users,  load average: 0.57, 0.35, 0.13
	Linux multinode-660958 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [244d4fc421e16a69243fec7bda9c69bb263d905c2db381e166d3359ad695c076] <==
	* I1207 20:27:52.671300       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1207 20:27:52.671459       1 main.go:107] hostIP = 192.168.39.19
	podIP = 192.168.39.19
	I1207 20:27:52.671683       1 main.go:116] setting mtu 1500 for CNI 
	I1207 20:27:52.671724       1 main.go:146] kindnetd IP family: "ipv4"
	I1207 20:27:52.671846       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1207 20:27:53.264566       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:27:53.264651       1 main.go:227] handling current node
	I1207 20:28:03.280647       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:28:03.280745       1 main.go:227] handling current node
	I1207 20:28:13.293042       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:28:13.293139       1 main.go:227] handling current node
	I1207 20:28:23.298118       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:28:23.298206       1 main.go:227] handling current node
	I1207 20:28:33.303549       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:28:33.303598       1 main.go:227] handling current node
	I1207 20:28:43.308581       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:28:43.308673       1 main.go:227] handling current node
	I1207 20:28:43.308697       1 main.go:223] Handling node with IPs: map[192.168.39.69:{}]
	I1207 20:28:43.308715       1 main.go:250] Node multinode-660958-m02 has CIDR [10.244.1.0/24] 
	I1207 20:28:43.308943       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.69 Flags: [] Table: 0} 
	I1207 20:28:53.320575       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:28:53.320671       1 main.go:227] handling current node
	I1207 20:28:53.320704       1 main.go:223] Handling node with IPs: map[192.168.39.69:{}]
	I1207 20:28:53.320723       1 main.go:250] Node multinode-660958-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [6feb8b3d9d8e69b81f6eb7f6c5ad15c287d21f7bc6ea1ed35fc5a363d7cd203c] <==
	* I1207 20:27:32.346349       1 shared_informer.go:318] Caches are synced for configmaps
	I1207 20:27:32.388679       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 20:27:32.405988       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1207 20:27:32.406155       1 aggregator.go:166] initial CRD sync complete...
	I1207 20:27:32.406257       1 autoregister_controller.go:141] Starting autoregister controller
	I1207 20:27:32.406282       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 20:27:32.406416       1 cache.go:39] Caches are synced for autoregister controller
	I1207 20:27:32.406224       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1207 20:27:32.407843       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1207 20:27:32.407880       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1207 20:27:33.218418       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1207 20:27:33.224251       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1207 20:27:33.224311       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 20:27:33.872927       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 20:27:33.918853       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 20:27:34.029577       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1207 20:27:34.036675       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.19]
	I1207 20:27:34.037653       1 controller.go:624] quota admission added evaluator for: endpoints
	I1207 20:27:34.041875       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 20:27:34.285260       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1207 20:27:35.659853       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1207 20:27:35.679006       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1207 20:27:35.698455       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1207 20:27:47.652008       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1207 20:27:47.862064       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [9f417b72d47e3920ba768af2a4a88b0bd46195b7d7faeeb29ffd9e0a29391c09] <==
	* I1207 20:27:53.666636       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.552µs"
	I1207 20:27:56.043522       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.238µs"
	I1207 20:27:56.093913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.240402ms"
	I1207 20:27:56.095571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.355µs"
	I1207 20:27:57.281734       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1207 20:28:36.373584       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-660958-m02\" does not exist"
	I1207 20:28:36.400965       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-660958-m02" podCIDRs=["10.244.1.0/24"]
	I1207 20:28:36.405076       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d764j"
	I1207 20:28:36.405120       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rxqfp"
	I1207 20:28:37.289684       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-660958-m02"
	I1207 20:28:37.290134       1 event.go:307] "Event occurred" object="multinode-660958-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-660958-m02 event: Registered Node multinode-660958-m02 in Controller"
	I1207 20:28:49.120896       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-660958-m02"
	I1207 20:28:51.376299       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1207 20:28:51.388814       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-vllfc"
	I1207 20:28:51.397489       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-jbm9q"
	I1207 20:28:51.417176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.806407ms"
	I1207 20:28:51.437543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="20.258523ms"
	I1207 20:28:51.437672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.488µs"
	I1207 20:28:51.455295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.906µs"
	I1207 20:28:51.460648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.302µs"
	I1207 20:28:52.316323       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-vllfc" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-vllfc"
	I1207 20:28:55.145268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.259848ms"
	I1207 20:28:55.145976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.782µs"
	I1207 20:28:55.244545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.340814ms"
	I1207 20:28:55.245487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="179.301µs"
	
	* 
	* ==> kube-proxy [a1a1e1b036c9b58f4b650a99aef8983904d507d294c3822e259a2f4988fd2fb7] <==
	* I1207 20:27:49.634077       1 server_others.go:69] "Using iptables proxy"
	I1207 20:27:49.649210       1 node.go:141] Successfully retrieved node IP: 192.168.39.19
	I1207 20:27:49.699929       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 20:27:49.699975       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 20:27:49.702838       1 server_others.go:152] "Using iptables Proxier"
	I1207 20:27:49.702893       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 20:27:49.703136       1 server.go:846] "Version info" version="v1.28.4"
	I1207 20:27:49.703171       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:27:49.704719       1 config.go:188] "Starting service config controller"
	I1207 20:27:49.704985       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 20:27:49.705016       1 config.go:97] "Starting endpoint slice config controller"
	I1207 20:27:49.705046       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 20:27:49.705535       1 config.go:315] "Starting node config controller"
	I1207 20:27:49.705573       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 20:27:49.805486       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 20:27:49.805543       1 shared_informer.go:318] Caches are synced for service config
	I1207 20:27:49.805606       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [cd178fae9e64a577638906c2b72ccd44ff6be3203aed9b436b91ed1840d2a095] <==
	* W1207 20:27:32.344014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 20:27:32.344040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 20:27:32.344070       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 20:27:32.344102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1207 20:27:32.347201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 20:27:32.347252       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 20:27:33.155537       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 20:27:33.155567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 20:27:33.169657       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 20:27:33.169948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 20:27:33.180198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 20:27:33.180221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1207 20:27:33.197956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 20:27:33.198008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1207 20:27:33.508630       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 20:27:33.508855       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 20:27:33.568512       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 20:27:33.568654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1207 20:27:33.592023       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 20:27:33.592123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 20:27:33.666260       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1207 20:27:33.666588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1207 20:27:33.668103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 20:27:33.668150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1207 20:27:35.827620       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 20:27:01 UTC, ends at Thu 2023-12-07 20:28:59 UTC. --
	Dec 07 20:27:48 multinode-660958 kubelet[1265]: E1207 20:27:48.122648    1265 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 07 20:27:48 multinode-660958 kubelet[1265]: E1207 20:27:48.122693    1265 projected.go:198] Error preparing data for projected volume kube-api-access-xqwk7 for pod kube-system/kindnet-jpfqs: configmap "kube-root-ca.crt" not found
	Dec 07 20:27:48 multinode-660958 kubelet[1265]: E1207 20:27:48.122920    1265 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/158552a2-294c-4d08-81de-05b1daf7dfe1-kube-api-access-xqwk7 podName:158552a2-294c-4d08-81de-05b1daf7dfe1 nodeName:}" failed. No retries permitted until 2023-12-07 20:27:48.62274924 +0000 UTC m=+12.983049778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xqwk7" (UniqueName: "kubernetes.io/projected/158552a2-294c-4d08-81de-05b1daf7dfe1-kube-api-access-xqwk7") pod "kindnet-jpfqs" (UID: "158552a2-294c-4d08-81de-05b1daf7dfe1") : configmap "kube-root-ca.crt" not found
	Dec 07 20:27:48 multinode-660958 kubelet[1265]: E1207 20:27:48.135947    1265 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 07 20:27:48 multinode-660958 kubelet[1265]: E1207 20:27:48.135979    1265 projected.go:198] Error preparing data for projected volume kube-api-access-hzprs for pod kube-system/kube-proxy-pfc45: configmap "kube-root-ca.crt" not found
	Dec 07 20:27:48 multinode-660958 kubelet[1265]: E1207 20:27:48.136101    1265 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e39fc15-3b2e-418c-92f1-32570e3bd853-kube-api-access-hzprs podName:1e39fc15-3b2e-418c-92f1-32570e3bd853 nodeName:}" failed. No retries permitted until 2023-12-07 20:27:48.636016924 +0000 UTC m=+12.996317462 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hzprs" (UniqueName: "kubernetes.io/projected/1e39fc15-3b2e-418c-92f1-32570e3bd853-kube-api-access-hzprs") pod "kube-proxy-pfc45" (UID: "1e39fc15-3b2e-418c-92f1-32570e3bd853") : configmap "kube-root-ca.crt" not found
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: I1207 20:27:53.067350    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pfc45" podStartSLOduration=6.067300995 podCreationTimestamp="2023-12-07 20:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-07 20:27:50.016039726 +0000 UTC m=+14.376340285" watchObservedRunningTime="2023-12-07 20:27:53.067300995 +0000 UTC m=+17.427601530"
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: I1207 20:27:53.536169    1265 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: I1207 20:27:53.611046    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-jpfqs" podStartSLOduration=6.611006899 podCreationTimestamp="2023-12-07 20:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-07 20:27:53.067625688 +0000 UTC m=+17.427926223" watchObservedRunningTime="2023-12-07 20:27:53.611006899 +0000 UTC m=+17.971307433"
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: I1207 20:27:53.611175    1265 topology_manager.go:215] "Topology Admit Handler" podUID="6d6632ea-9aae-43e7-8b17-56399870082b" podNamespace="kube-system" podName="coredns-5dd5756b68-7mss7"
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: W1207 20:27:53.616509    1265 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-660958" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-660958' and this object
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: E1207 20:27:53.616565    1265 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-660958" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-660958' and this object
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: I1207 20:27:53.619907    1265 topology_manager.go:215] "Topology Admit Handler" podUID="48bcf9dc-632d-4f04-9f6a-04d31cef5d88" podNamespace="kube-system" podName="storage-provisioner"
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: I1207 20:27:53.626123    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d6632ea-9aae-43e7-8b17-56399870082b-config-volume\") pod \"coredns-5dd5756b68-7mss7\" (UID: \"6d6632ea-9aae-43e7-8b17-56399870082b\") " pod="kube-system/coredns-5dd5756b68-7mss7"
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: I1207 20:27:53.626184    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tql\" (UniqueName: \"kubernetes.io/projected/6d6632ea-9aae-43e7-8b17-56399870082b-kube-api-access-c2tql\") pod \"coredns-5dd5756b68-7mss7\" (UID: \"6d6632ea-9aae-43e7-8b17-56399870082b\") " pod="kube-system/coredns-5dd5756b68-7mss7"
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: I1207 20:27:53.727121    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/48bcf9dc-632d-4f04-9f6a-04d31cef5d88-tmp\") pod \"storage-provisioner\" (UID: \"48bcf9dc-632d-4f04-9f6a-04d31cef5d88\") " pod="kube-system/storage-provisioner"
	Dec 07 20:27:53 multinode-660958 kubelet[1265]: I1207 20:27:53.727164    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jpgm\" (UniqueName: \"kubernetes.io/projected/48bcf9dc-632d-4f04-9f6a-04d31cef5d88-kube-api-access-7jpgm\") pod \"storage-provisioner\" (UID: \"48bcf9dc-632d-4f04-9f6a-04d31cef5d88\") " pod="kube-system/storage-provisioner"
	Dec 07 20:27:55 multinode-660958 kubelet[1265]: I1207 20:27:55.034113    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.034078039 podCreationTimestamp="2023-12-07 20:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-07 20:27:55.033405407 +0000 UTC m=+19.393705941" watchObservedRunningTime="2023-12-07 20:27:55.034078039 +0000 UTC m=+19.394378554"
	Dec 07 20:27:56 multinode-660958 kubelet[1265]: I1207 20:27:56.076342    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-7mss7" podStartSLOduration=8.076306305 podCreationTimestamp="2023-12-07 20:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-07 20:27:56.041955971 +0000 UTC m=+20.402256505" watchObservedRunningTime="2023-12-07 20:27:56.076306305 +0000 UTC m=+20.436606840"
	Dec 07 20:28:35 multinode-660958 kubelet[1265]: E1207 20:28:35.903257    1265 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 20:28:35 multinode-660958 kubelet[1265]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 20:28:35 multinode-660958 kubelet[1265]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 20:28:35 multinode-660958 kubelet[1265]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 20:28:51 multinode-660958 kubelet[1265]: I1207 20:28:51.409051    1265 topology_manager.go:215] "Topology Admit Handler" podUID="c38ee0c6-472e-4db5-bb15-c1f1ce390207" podNamespace="default" podName="busybox-5bc68d56bd-jbm9q"
	Dec 07 20:28:51 multinode-660958 kubelet[1265]: I1207 20:28:51.483949    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq5rf\" (UniqueName: \"kubernetes.io/projected/c38ee0c6-472e-4db5-bb15-c1f1ce390207-kube-api-access-bq5rf\") pod \"busybox-5bc68d56bd-jbm9q\" (UID: \"c38ee0c6-472e-4db5-bb15-c1f1ce390207\") " pod="default/busybox-5bc68d56bd-jbm9q"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-660958 -n multinode-660958
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-660958 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.54s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (688.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-660958
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-660958
E1207 20:31:05.939384   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:31:41.700943   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-660958: exit status 82 (2m1.123863168s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-660958"  ...
	* Stopping node "multinode-660958"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-660958" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-660958 --wait=true -v=8 --alsologtostderr
E1207 20:33:04.746135   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:34:28.941832   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:36:05.939759   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:36:41.700765   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:37:28.984098   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:39:28.943305   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:40:51.986881   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:41:05.939299   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:41:41.700806   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-660958 --wait=true -v=8 --alsologtostderr: (9m24.948883969s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-660958
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-660958 -n multinode-660958
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-660958 logs -n 25: (1.560249823s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-660958 ssh -n                                                                 | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-660958 cp multinode-660958-m02:/home/docker/cp-test.txt                       | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile186535973/001/cp-test_multinode-660958-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n                                                                 | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-660958 cp multinode-660958-m02:/home/docker/cp-test.txt                       | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958:/home/docker/cp-test_multinode-660958-m02_multinode-660958.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n                                                                 | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n multinode-660958 sudo cat                                       | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | /home/docker/cp-test_multinode-660958-m02_multinode-660958.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-660958 cp multinode-660958-m02:/home/docker/cp-test.txt                       | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m03:/home/docker/cp-test_multinode-660958-m02_multinode-660958-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n                                                                 | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n multinode-660958-m03 sudo cat                                   | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | /home/docker/cp-test_multinode-660958-m02_multinode-660958-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-660958 cp testdata/cp-test.txt                                                | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n                                                                 | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-660958 cp multinode-660958-m03:/home/docker/cp-test.txt                       | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile186535973/001/cp-test_multinode-660958-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n                                                                 | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-660958 cp multinode-660958-m03:/home/docker/cp-test.txt                       | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958:/home/docker/cp-test_multinode-660958-m03_multinode-660958.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n                                                                 | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n multinode-660958 sudo cat                                       | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | /home/docker/cp-test_multinode-660958-m03_multinode-660958.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-660958 cp multinode-660958-m03:/home/docker/cp-test.txt                       | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m02:/home/docker/cp-test_multinode-660958-m03_multinode-660958-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n                                                                 | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n multinode-660958-m02 sudo cat                                   | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | /home/docker/cp-test_multinode-660958-m03_multinode-660958-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-660958 node stop m03                                                          | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	| node    | multinode-660958 node start                                                             | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:30 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-660958                                                                | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:30 UTC |                     |
	| stop    | -p multinode-660958                                                                     | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:30 UTC |                     |
	| start   | -p multinode-660958                                                                     | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:32 UTC | 07 Dec 23 20:41 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-660958                                                                | multinode-660958 | jenkins | v1.32.0 | 07 Dec 23 20:41 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:32:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:32:29.342977   33734 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:32:29.343115   33734 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:32:29.343123   33734 out.go:309] Setting ErrFile to fd 2...
	I1207 20:32:29.343128   33734 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:32:29.343291   33734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 20:32:29.343828   33734 out.go:303] Setting JSON to false
	I1207 20:32:29.344659   33734 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4495,"bootTime":1701976654,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 20:32:29.344715   33734 start.go:138] virtualization: kvm guest
	I1207 20:32:29.347297   33734 out.go:177] * [multinode-660958] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 20:32:29.349030   33734 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:32:29.349033   33734 notify.go:220] Checking for updates...
	I1207 20:32:29.350769   33734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:32:29.352418   33734 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:32:29.354027   33734 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:32:29.355388   33734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 20:32:29.356821   33734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:32:29.358538   33734 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:32:29.358616   33734 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:32:29.359023   33734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:32:29.359081   33734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:32:29.373596   33734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I1207 20:32:29.374046   33734 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:32:29.374559   33734 main.go:141] libmachine: Using API Version  1
	I1207 20:32:29.374580   33734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:32:29.374925   33734 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:32:29.375136   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:32:29.412070   33734 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 20:32:29.413599   33734 start.go:298] selected driver: kvm2
	I1207 20:32:29.413612   33734 start.go:902] validating driver "kvm2" against &{Name:multinode-660958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.20 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:32:29.413760   33734 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:32:29.414136   33734 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:32:29.414214   33734 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 20:32:29.428300   33734 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 20:32:29.428993   33734 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 20:32:29.429069   33734 cni.go:84] Creating CNI manager for ""
	I1207 20:32:29.429085   33734 cni.go:136] 3 nodes found, recommending kindnet
	I1207 20:32:29.429095   33734 start_flags.go:323] config:
	{Name:multinode-660958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-660958 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.20 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provi
sioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:32:29.429335   33734 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:32:29.431357   33734 out.go:177] * Starting control plane node multinode-660958 in cluster multinode-660958
	I1207 20:32:29.432769   33734 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:32:29.432797   33734 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 20:32:29.432815   33734 cache.go:56] Caching tarball of preloaded images
	I1207 20:32:29.432902   33734 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 20:32:29.432915   33734 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 20:32:29.433036   33734 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
	I1207 20:32:29.433218   33734 start.go:365] acquiring machines lock for multinode-660958: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 20:32:29.433260   33734 start.go:369] acquired machines lock for "multinode-660958" in 24.489µs
	I1207 20:32:29.433278   33734 start.go:96] Skipping create...Using existing machine configuration
	I1207 20:32:29.433288   33734 fix.go:54] fixHost starting: 
	I1207 20:32:29.433570   33734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:32:29.433609   33734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:32:29.446945   33734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I1207 20:32:29.447371   33734 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:32:29.447840   33734 main.go:141] libmachine: Using API Version  1
	I1207 20:32:29.447868   33734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:32:29.448228   33734 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:32:29.448428   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:32:29.448602   33734 main.go:141] libmachine: (multinode-660958) Calling .GetState
	I1207 20:32:29.449891   33734 fix.go:102] recreateIfNeeded on multinode-660958: state=Running err=<nil>
	W1207 20:32:29.449906   33734 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 20:32:29.452104   33734 out.go:177] * Updating the running kvm2 "multinode-660958" VM ...
	I1207 20:32:29.453635   33734 machine.go:88] provisioning docker machine ...
	I1207 20:32:29.453654   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:32:29.453849   33734 main.go:141] libmachine: (multinode-660958) Calling .GetMachineName
	I1207 20:32:29.454016   33734 buildroot.go:166] provisioning hostname "multinode-660958"
	I1207 20:32:29.454037   33734 main.go:141] libmachine: (multinode-660958) Calling .GetMachineName
	I1207 20:32:29.454174   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:32:29.456221   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:32:29.456659   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:32:29.456683   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:32:29.456815   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:32:29.457011   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:32:29.457209   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:32:29.457365   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:32:29.457547   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:32:29.458041   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:32:29.458061   33734 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-660958 && echo "multinode-660958" | sudo tee /etc/hostname
	I1207 20:32:48.018160   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:32:54.098177   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:32:57.170171   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:03.250259   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:06.322320   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:12.402193   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:15.474182   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:21.554269   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:24.626119   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:30.706166   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:33.778230   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:39.858164   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:42.930220   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:49.010192   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:52.082332   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:33:58.162162   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:01.234219   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:07.314196   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:10.386182   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:16.466196   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:19.538195   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:25.618295   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:28.690209   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:34.770188   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:37.842183   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:43.922224   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:46.994164   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:53.074226   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:34:56.146250   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:02.226194   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:05.298264   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:11.378224   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:14.450107   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:20.530227   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:23.602150   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:29.682170   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:32.754183   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:38.834222   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:41.906166   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:47.986155   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:51.058247   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:35:57.138157   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:00.210221   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:06.290207   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:09.362258   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:15.442181   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:18.514191   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:24.594180   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:27.666209   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:33.750178   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:36.818253   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:42.898160   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:45.970129   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:52.050189   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:36:55.122314   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:37:01.202233   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:37:04.274174   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:37:10.354181   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:37:13.426217   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:37:19.506167   33734 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.19:22: connect: no route to host
	I1207 20:37:22.508316   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:37:22.508375   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:37:22.510379   33734 machine.go:91] provisioned docker machine in 4m53.056727527s
	I1207 20:37:22.510419   33734 fix.go:56] fixHost completed within 4m53.077131333s
	I1207 20:37:22.510429   33734 start.go:83] releasing machines lock for "multinode-660958", held for 4m53.077157248s
	W1207 20:37:22.510451   33734 start.go:694] error starting host: provision: host is not running
	W1207 20:37:22.510539   33734 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1207 20:37:22.510551   33734 start.go:709] Will try again in 5 seconds ...
	I1207 20:37:27.513530   33734 start.go:365] acquiring machines lock for multinode-660958: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 20:37:27.513636   33734 start.go:369] acquired machines lock for "multinode-660958" in 69.084µs
	I1207 20:37:27.513660   33734 start.go:96] Skipping create...Using existing machine configuration
	I1207 20:37:27.513670   33734 fix.go:54] fixHost starting: 
	I1207 20:37:27.514068   33734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:37:27.514096   33734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:37:27.528400   33734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I1207 20:37:27.528786   33734 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:37:27.529221   33734 main.go:141] libmachine: Using API Version  1
	I1207 20:37:27.529241   33734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:37:27.529581   33734 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:37:27.529743   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:37:27.529874   33734 main.go:141] libmachine: (multinode-660958) Calling .GetState
	I1207 20:37:27.531475   33734 fix.go:102] recreateIfNeeded on multinode-660958: state=Stopped err=<nil>
	I1207 20:37:27.531502   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	W1207 20:37:27.531664   33734 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 20:37:27.533969   33734 out.go:177] * Restarting existing kvm2 VM for "multinode-660958" ...
	I1207 20:37:27.535700   33734 main.go:141] libmachine: (multinode-660958) Calling .Start
	I1207 20:37:27.535840   33734 main.go:141] libmachine: (multinode-660958) Ensuring networks are active...
	I1207 20:37:27.536549   33734 main.go:141] libmachine: (multinode-660958) Ensuring network default is active
	I1207 20:37:27.536848   33734 main.go:141] libmachine: (multinode-660958) Ensuring network mk-multinode-660958 is active
	I1207 20:37:27.537188   33734 main.go:141] libmachine: (multinode-660958) Getting domain xml...
	I1207 20:37:27.537824   33734 main.go:141] libmachine: (multinode-660958) Creating domain...
	I1207 20:37:28.751134   33734 main.go:141] libmachine: (multinode-660958) Waiting to get IP...
	I1207 20:37:28.752039   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:28.752401   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:28.752464   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:28.752376   34524 retry.go:31] will retry after 230.765705ms: waiting for machine to come up
	I1207 20:37:28.985020   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:28.985521   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:28.985550   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:28.985480   34524 retry.go:31] will retry after 239.92736ms: waiting for machine to come up
	I1207 20:37:29.226929   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:29.227396   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:29.227451   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:29.227321   34524 retry.go:31] will retry after 301.503864ms: waiting for machine to come up
	I1207 20:37:29.530751   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:29.531202   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:29.531233   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:29.531162   34524 retry.go:31] will retry after 500.962759ms: waiting for machine to come up
	I1207 20:37:30.034052   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:30.034578   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:30.034600   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:30.034545   34524 retry.go:31] will retry after 459.152894ms: waiting for machine to come up
	I1207 20:37:30.495165   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:30.495686   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:30.495716   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:30.495634   34524 retry.go:31] will retry after 881.825249ms: waiting for machine to come up
	I1207 20:37:31.378676   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:31.379120   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:31.379145   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:31.379057   34524 retry.go:31] will retry after 1.080646009s: waiting for machine to come up
	I1207 20:37:32.461311   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:32.461705   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:32.461732   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:32.461669   34524 retry.go:31] will retry after 948.668255ms: waiting for machine to come up
	I1207 20:37:33.412386   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:33.412772   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:33.412801   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:33.412741   34524 retry.go:31] will retry after 1.274516346s: waiting for machine to come up
	I1207 20:37:34.688588   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:34.688992   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:34.689034   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:34.688946   34524 retry.go:31] will retry after 1.59185128s: waiting for machine to come up
	I1207 20:37:36.282658   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:36.283167   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:36.283198   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:36.283122   34524 retry.go:31] will retry after 2.379213764s: waiting for machine to come up
	I1207 20:37:38.664146   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:38.664717   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:38.664756   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:38.664654   34524 retry.go:31] will retry after 2.998138908s: waiting for machine to come up
	I1207 20:37:41.666712   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:41.667139   33734 main.go:141] libmachine: (multinode-660958) DBG | unable to find current IP address of domain multinode-660958 in network mk-multinode-660958
	I1207 20:37:41.667160   33734 main.go:141] libmachine: (multinode-660958) DBG | I1207 20:37:41.667101   34524 retry.go:31] will retry after 3.662259364s: waiting for machine to come up
	I1207 20:37:45.330929   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.331506   33734 main.go:141] libmachine: (multinode-660958) Found IP for machine: 192.168.39.19
	I1207 20:37:45.331531   33734 main.go:141] libmachine: (multinode-660958) Reserving static IP address...
	I1207 20:37:45.331548   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has current primary IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.332099   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "multinode-660958", mac: "52:54:00:f5:93:7e", ip: "192.168.39.19"} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:45.332134   33734 main.go:141] libmachine: (multinode-660958) Reserved static IP address: 192.168.39.19
	I1207 20:37:45.332154   33734 main.go:141] libmachine: (multinode-660958) DBG | skip adding static IP to network mk-multinode-660958 - found existing host DHCP lease matching {name: "multinode-660958", mac: "52:54:00:f5:93:7e", ip: "192.168.39.19"}
	I1207 20:37:45.332168   33734 main.go:141] libmachine: (multinode-660958) Waiting for SSH to be available...
	I1207 20:37:45.332209   33734 main.go:141] libmachine: (multinode-660958) DBG | Getting to WaitForSSH function...
	I1207 20:37:45.334894   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.335357   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:45.335385   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.335567   33734 main.go:141] libmachine: (multinode-660958) DBG | Using SSH client type: external
	I1207 20:37:45.335590   33734 main.go:141] libmachine: (multinode-660958) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa (-rw-------)
	I1207 20:37:45.335639   33734 main.go:141] libmachine: (multinode-660958) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 20:37:45.335661   33734 main.go:141] libmachine: (multinode-660958) DBG | About to run SSH command:
	I1207 20:37:45.335677   33734 main.go:141] libmachine: (multinode-660958) DBG | exit 0
	I1207 20:37:45.430246   33734 main.go:141] libmachine: (multinode-660958) DBG | SSH cmd err, output: <nil>: 
	I1207 20:37:45.430590   33734 main.go:141] libmachine: (multinode-660958) Calling .GetConfigRaw
	I1207 20:37:45.431196   33734 main.go:141] libmachine: (multinode-660958) Calling .GetIP
	I1207 20:37:45.433802   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.434265   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:45.434299   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.434526   33734 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
	I1207 20:37:45.434710   33734 machine.go:88] provisioning docker machine ...
	I1207 20:37:45.434730   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:37:45.434931   33734 main.go:141] libmachine: (multinode-660958) Calling .GetMachineName
	I1207 20:37:45.435076   33734 buildroot.go:166] provisioning hostname "multinode-660958"
	I1207 20:37:45.435093   33734 main.go:141] libmachine: (multinode-660958) Calling .GetMachineName
	I1207 20:37:45.435245   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:37:45.437634   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.438003   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:45.438035   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.438084   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:37:45.438245   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:45.438374   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:45.438515   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:37:45.438691   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:37:45.439007   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:37:45.439020   33734 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-660958 && echo "multinode-660958" | sudo tee /etc/hostname
	I1207 20:37:45.580326   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-660958
	
	I1207 20:37:45.580350   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:37:45.582965   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.583355   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:45.583375   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.583509   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:37:45.583688   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:45.583890   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:45.584034   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:37:45.584188   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:37:45.584483   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:37:45.584499   33734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-660958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-660958/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-660958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:37:45.719965   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:37:45.719995   33734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 20:37:45.720012   33734 buildroot.go:174] setting up certificates
	I1207 20:37:45.720021   33734 provision.go:83] configureAuth start
	I1207 20:37:45.720029   33734 main.go:141] libmachine: (multinode-660958) Calling .GetMachineName
	I1207 20:37:45.720338   33734 main.go:141] libmachine: (multinode-660958) Calling .GetIP
	I1207 20:37:45.722859   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.723236   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:45.723293   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.723423   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:37:45.725552   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.725991   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:45.726028   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:45.726163   33734 provision.go:138] copyHostCerts
	I1207 20:37:45.726194   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:37:45.726228   33734 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 20:37:45.726252   33734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:37:45.726333   33734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 20:37:45.726427   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:37:45.726474   33734 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 20:37:45.726484   33734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:37:45.726524   33734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 20:37:45.726583   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:37:45.726606   33734 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 20:37:45.726615   33734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:37:45.726649   33734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 20:37:45.726717   33734 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.multinode-660958 san=[192.168.39.19 192.168.39.19 localhost 127.0.0.1 minikube multinode-660958]
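	If the SAN list logged above needs to be checked against the certificate that was actually written, the generated server.pem can be inspected directly. This is only a sketch: the path is the one logged above, and openssl is assumed to be available on the Jenkins host (it is not part of the run itself).
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'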
	I1207 20:37:46.051111   33734 provision.go:172] copyRemoteCerts
	I1207 20:37:46.051184   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:37:46.051214   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:37:46.054223   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.054633   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:46.054665   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.054808   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:37:46.055000   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:46.055194   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:37:46.055293   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:37:46.147251   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 20:37:46.147335   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 20:37:46.170390   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 20:37:46.170456   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1207 20:37:46.192516   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 20:37:46.192583   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 20:37:46.214778   33734 provision.go:86] duration metric: configureAuth took 494.745759ms
	I1207 20:37:46.214809   33734 buildroot.go:189] setting minikube options for container-runtime
	I1207 20:37:46.215042   33734 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:37:46.215121   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:37:46.217802   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.218195   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:46.218223   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.218390   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:37:46.218576   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:46.218754   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:46.218873   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:37:46.219044   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:37:46.219378   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:37:46.219397   33734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 20:37:46.526519   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 20:37:46.526555   33734 machine.go:91] provisioned docker machine in 1.091830948s
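	The SSH command above amounts to one small sysconfig drop-in on the guest. Reproduced by hand it would look roughly like the following sketch; the only value carried over is the service CIDR 10.96.0.0/12 exactly as logged, nothing else is assumed.
	    sudo mkdir -p /etc/sysconfig
	    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	      | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio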
	I1207 20:37:46.526570   33734 start.go:300] post-start starting for "multinode-660958" (driver="kvm2")
	I1207 20:37:46.526583   33734 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:37:46.526603   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:37:46.526927   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:37:46.526968   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:37:46.529737   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.530085   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:46.530119   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.530290   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:37:46.530502   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:46.530662   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:37:46.530868   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:37:46.623698   33734 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:37:46.627710   33734 command_runner.go:130] > NAME=Buildroot
	I1207 20:37:46.627731   33734 command_runner.go:130] > VERSION=2021.02.12-1-ge2b7375-dirty
	I1207 20:37:46.627736   33734 command_runner.go:130] > ID=buildroot
	I1207 20:37:46.627743   33734 command_runner.go:130] > VERSION_ID=2021.02.12
	I1207 20:37:46.627751   33734 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1207 20:37:46.627786   33734 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 20:37:46.627802   33734 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 20:37:46.627892   33734 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 20:37:46.627960   33734 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 20:37:46.627969   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /etc/ssl/certs/168402.pem
	I1207 20:37:46.628049   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 20:37:46.635985   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:37:46.656948   33734 start.go:303] post-start completed in 130.362616ms
	I1207 20:37:46.656973   33734 fix.go:56] fixHost completed within 19.143303686s
	I1207 20:37:46.656991   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:37:46.659480   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.659854   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:46.659884   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.660050   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:37:46.660229   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:46.660441   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:46.660551   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:37:46.660733   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:37:46.661135   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1207 20:37:46.661149   33734 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 20:37:46.790564   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701981466.741131601
	
	I1207 20:37:46.790587   33734 fix.go:206] guest clock: 1701981466.741131601
	I1207 20:37:46.790595   33734 fix.go:219] Guest: 2023-12-07 20:37:46.741131601 +0000 UTC Remote: 2023-12-07 20:37:46.656976421 +0000 UTC m=+317.362697381 (delta=84.15518ms)
	I1207 20:37:46.790617   33734 fix.go:190] guest clock delta is within tolerance: 84.15518ms
	I1207 20:37:46.790635   33734 start.go:83] releasing machines lock for "multinode-660958", held for 19.27697646s
	I1207 20:37:46.790660   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:37:46.790903   33734 main.go:141] libmachine: (multinode-660958) Calling .GetIP
	I1207 20:37:46.793367   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.793751   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:46.793780   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.793905   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:37:46.794430   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:37:46.794593   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:37:46.794672   33734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:37:46.794709   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:37:46.794792   33734 ssh_runner.go:195] Run: cat /version.json
	I1207 20:37:46.794862   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:37:46.797316   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.797667   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.797754   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:46.797784   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.797872   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:37:46.798088   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:46.798204   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:46.798217   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:37:46.798232   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:46.798342   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:37:46.798462   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:37:46.798599   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:37:46.798752   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:37:46.798872   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:37:46.894385   33734 command_runner.go:130] > {"iso_version": "v1.32.1-1701788780-17711", "kicbase_version": "v0.0.42-1701685682-17711", "minikube_version": "v1.32.0", "commit": "3d3a6783269a57f5d9691dd9fa861c5802b7a18b"}
	I1207 20:37:46.894681   33734 ssh_runner.go:195] Run: systemctl --version
	I1207 20:37:46.918220   33734 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1207 20:37:46.918262   33734 command_runner.go:130] > systemd 247 (247)
	I1207 20:37:46.918289   33734 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1207 20:37:46.918360   33734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 20:37:47.064286   33734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1207 20:37:47.069857   33734 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1207 20:37:47.070081   33734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 20:37:47.070147   33734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 20:37:47.085609   33734 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1207 20:37:47.085638   33734 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 20:37:47.085645   33734 start.go:475] detecting cgroup driver to use...
	I1207 20:37:47.085701   33734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 20:37:47.105385   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 20:37:47.118263   33734 docker.go:203] disabling cri-docker service (if available) ...
	I1207 20:37:47.118333   33734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 20:37:47.131423   33734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 20:37:47.144603   33734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 20:37:47.245539   33734 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1207 20:37:47.245664   33734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 20:37:47.366731   33734 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1207 20:37:47.366767   33734 docker.go:219] disabling docker service ...
	I1207 20:37:47.366818   33734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 20:37:47.380175   33734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 20:37:47.391888   33734 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1207 20:37:47.391976   33734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 20:37:47.405507   33734 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1207 20:37:47.506650   33734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 20:37:47.617543   33734 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1207 20:37:47.617574   33734 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1207 20:37:47.617636   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 20:37:47.631091   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:37:47.647431   33734 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1207 20:37:47.647508   33734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 20:37:47.647568   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:37:47.657291   33734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 20:37:47.657362   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:37:47.666302   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:37:47.675826   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:37:47.685872   33734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 20:37:47.695729   33734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:37:47.704805   33734 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 20:37:47.704844   33734 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 20:37:47.704919   33734 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 20:37:47.717755   33734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 20:37:47.727704   33734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:37:47.837653   33734 ssh_runner.go:195] Run: sudo systemctl restart crio
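	Collected from the run above, the CRI-O reconfiguration for this profile comes down to the following guest-side sequence. It is a minimal sketch assembled from the commands as logged; the file paths assume the CRI-O 1.24.x drop-in layout (/etc/crio/crio.conf.d/02-crio.conf) shown here.
	    printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" | sudo tee /etc/crictl.yaml
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo modprobe br_netfilter && sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio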
	I1207 20:37:48.009979   33734 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 20:37:48.010068   33734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 20:37:48.014864   33734 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1207 20:37:48.014889   33734 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1207 20:37:48.014899   33734 command_runner.go:130] > Device: 16h/22d	Inode: 742         Links: 1
	I1207 20:37:48.014909   33734 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1207 20:37:48.014918   33734 command_runner.go:130] > Access: 2023-12-07 20:37:47.946910444 +0000
	I1207 20:37:48.014928   33734 command_runner.go:130] > Modify: 2023-12-07 20:37:47.946910444 +0000
	I1207 20:37:48.014937   33734 command_runner.go:130] > Change: 2023-12-07 20:37:47.946910444 +0000
	I1207 20:37:48.014942   33734 command_runner.go:130] >  Birth: -
	I1207 20:37:48.014959   33734 start.go:543] Will wait 60s for crictl version
	I1207 20:37:48.015009   33734 ssh_runner.go:195] Run: which crictl
	I1207 20:37:48.018745   33734 command_runner.go:130] > /usr/bin/crictl
	I1207 20:37:48.018847   33734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 20:37:48.058598   33734 command_runner.go:130] > Version:  0.1.0
	I1207 20:37:48.058626   33734 command_runner.go:130] > RuntimeName:  cri-o
	I1207 20:37:48.058634   33734 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1207 20:37:48.058642   33734 command_runner.go:130] > RuntimeApiVersion:  v1
	I1207 20:37:48.058739   33734 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 20:37:48.058810   33734 ssh_runner.go:195] Run: crio --version
	I1207 20:37:48.109454   33734 command_runner.go:130] > crio version 1.24.1
	I1207 20:37:48.109480   33734 command_runner.go:130] > Version:          1.24.1
	I1207 20:37:48.109491   33734 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1207 20:37:48.109498   33734 command_runner.go:130] > GitTreeState:     dirty
	I1207 20:37:48.109517   33734 command_runner.go:130] > BuildDate:        2023-12-05T19:18:32Z
	I1207 20:37:48.109524   33734 command_runner.go:130] > GoVersion:        go1.19.9
	I1207 20:37:48.109531   33734 command_runner.go:130] > Compiler:         gc
	I1207 20:37:48.109537   33734 command_runner.go:130] > Platform:         linux/amd64
	I1207 20:37:48.109545   33734 command_runner.go:130] > Linkmode:         dynamic
	I1207 20:37:48.109559   33734 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1207 20:37:48.109565   33734 command_runner.go:130] > SeccompEnabled:   true
	I1207 20:37:48.109569   33734 command_runner.go:130] > AppArmorEnabled:  false
	I1207 20:37:48.109632   33734 ssh_runner.go:195] Run: crio --version
	I1207 20:37:48.159108   33734 command_runner.go:130] > crio version 1.24.1
	I1207 20:37:48.159140   33734 command_runner.go:130] > Version:          1.24.1
	I1207 20:37:48.159151   33734 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1207 20:37:48.159163   33734 command_runner.go:130] > GitTreeState:     dirty
	I1207 20:37:48.159169   33734 command_runner.go:130] > BuildDate:        2023-12-05T19:18:32Z
	I1207 20:37:48.159173   33734 command_runner.go:130] > GoVersion:        go1.19.9
	I1207 20:37:48.159177   33734 command_runner.go:130] > Compiler:         gc
	I1207 20:37:48.159182   33734 command_runner.go:130] > Platform:         linux/amd64
	I1207 20:37:48.159186   33734 command_runner.go:130] > Linkmode:         dynamic
	I1207 20:37:48.159194   33734 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1207 20:37:48.159198   33734 command_runner.go:130] > SeccompEnabled:   true
	I1207 20:37:48.159202   33734 command_runner.go:130] > AppArmorEnabled:  false
	I1207 20:37:48.161676   33734 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 20:37:48.163068   33734 main.go:141] libmachine: (multinode-660958) Calling .GetIP
	I1207 20:37:48.165786   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:48.166214   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:37:48.166236   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:37:48.166439   33734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 20:37:48.170550   33734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:37:48.182390   33734 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:37:48.182447   33734 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 20:37:48.220038   33734 command_runner.go:130] > {
	I1207 20:37:48.220061   33734 command_runner.go:130] >   "images": [
	I1207 20:37:48.220065   33734 command_runner.go:130] >     {
	I1207 20:37:48.220073   33734 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1207 20:37:48.220077   33734 command_runner.go:130] >       "repoTags": [
	I1207 20:37:48.220091   33734 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1207 20:37:48.220096   33734 command_runner.go:130] >       ],
	I1207 20:37:48.220102   33734 command_runner.go:130] >       "repoDigests": [
	I1207 20:37:48.220116   33734 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1207 20:37:48.220136   33734 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1207 20:37:48.220143   33734 command_runner.go:130] >       ],
	I1207 20:37:48.220150   33734 command_runner.go:130] >       "size": "750414",
	I1207 20:37:48.220156   33734 command_runner.go:130] >       "uid": {
	I1207 20:37:48.220162   33734 command_runner.go:130] >         "value": "65535"
	I1207 20:37:48.220166   33734 command_runner.go:130] >       },
	I1207 20:37:48.220170   33734 command_runner.go:130] >       "username": "",
	I1207 20:37:48.220175   33734 command_runner.go:130] >       "spec": null,
	I1207 20:37:48.220179   33734 command_runner.go:130] >       "pinned": false
	I1207 20:37:48.220182   33734 command_runner.go:130] >     }
	I1207 20:37:48.220185   33734 command_runner.go:130] >   ]
	I1207 20:37:48.220189   33734 command_runner.go:130] > }
	I1207 20:37:48.220300   33734 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 20:37:48.220358   33734 ssh_runner.go:195] Run: which lz4
	I1207 20:37:48.224309   33734 command_runner.go:130] > /usr/bin/lz4
	I1207 20:37:48.224340   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1207 20:37:48.224443   33734 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 20:37:48.228354   33734 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 20:37:48.228587   33734 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 20:37:48.228609   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 20:37:50.050261   33734 crio.go:444] Took 1.825869 seconds to copy over tarball
	I1207 20:37:50.050331   33734 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 20:37:52.916091   33734 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.865732856s)
	I1207 20:37:52.916123   33734 crio.go:451] Took 2.865837 seconds to extract the tarball
	I1207 20:37:52.916136   33734 ssh_runner.go:146] rm: /preloaded.tar.lz4
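	For reference, the preload restore that just ran is equivalent to copying the tarball to the guest and unpacking it under /var. The sketch below uses the filename exactly as logged and assumes the archive carries the CRI-O image store, which is consistent with the image listing that follows.
	    # on the guest, after the tarball has been copied to /preloaded.tar.lz4
	    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm /preloaded.tar.lz4
	    sudo crictl images --output json    # should now list the v1.28.4 control-plane images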
	I1207 20:37:52.957609   33734 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 20:37:53.004509   33734 command_runner.go:130] > {
	I1207 20:37:53.004526   33734 command_runner.go:130] >   "images": [
	I1207 20:37:53.004530   33734 command_runner.go:130] >     {
	I1207 20:37:53.004537   33734 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1207 20:37:53.004542   33734 command_runner.go:130] >       "repoTags": [
	I1207 20:37:53.004548   33734 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1207 20:37:53.004551   33734 command_runner.go:130] >       ],
	I1207 20:37:53.004555   33734 command_runner.go:130] >       "repoDigests": [
	I1207 20:37:53.004563   33734 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1207 20:37:53.004586   33734 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1207 20:37:53.004593   33734 command_runner.go:130] >       ],
	I1207 20:37:53.004600   33734 command_runner.go:130] >       "size": "65258016",
	I1207 20:37:53.004608   33734 command_runner.go:130] >       "uid": null,
	I1207 20:37:53.004614   33734 command_runner.go:130] >       "username": "",
	I1207 20:37:53.004619   33734 command_runner.go:130] >       "spec": null,
	I1207 20:37:53.004627   33734 command_runner.go:130] >       "pinned": false
	I1207 20:37:53.004630   33734 command_runner.go:130] >     },
	I1207 20:37:53.004633   33734 command_runner.go:130] >     {
	I1207 20:37:53.004639   33734 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1207 20:37:53.004644   33734 command_runner.go:130] >       "repoTags": [
	I1207 20:37:53.004649   33734 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1207 20:37:53.004655   33734 command_runner.go:130] >       ],
	I1207 20:37:53.004659   33734 command_runner.go:130] >       "repoDigests": [
	I1207 20:37:53.004670   33734 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1207 20:37:53.004687   33734 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1207 20:37:53.004698   33734 command_runner.go:130] >       ],
	I1207 20:37:53.004713   33734 command_runner.go:130] >       "size": "31470524",
	I1207 20:37:53.004725   33734 command_runner.go:130] >       "uid": null,
	I1207 20:37:53.004733   33734 command_runner.go:130] >       "username": "",
	I1207 20:37:53.004739   33734 command_runner.go:130] >       "spec": null,
	I1207 20:37:53.004743   33734 command_runner.go:130] >       "pinned": false
	I1207 20:37:53.004749   33734 command_runner.go:130] >     },
	I1207 20:37:53.004753   33734 command_runner.go:130] >     {
	I1207 20:37:53.004761   33734 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1207 20:37:53.004767   33734 command_runner.go:130] >       "repoTags": [
	I1207 20:37:53.004774   33734 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1207 20:37:53.004784   33734 command_runner.go:130] >       ],
	I1207 20:37:53.004795   33734 command_runner.go:130] >       "repoDigests": [
	I1207 20:37:53.004811   33734 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1207 20:37:53.004826   33734 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1207 20:37:53.004835   33734 command_runner.go:130] >       ],
	I1207 20:37:53.004846   33734 command_runner.go:130] >       "size": "53621675",
	I1207 20:37:53.004854   33734 command_runner.go:130] >       "uid": null,
	I1207 20:37:53.004861   33734 command_runner.go:130] >       "username": "",
	I1207 20:37:53.004865   33734 command_runner.go:130] >       "spec": null,
	I1207 20:37:53.004876   33734 command_runner.go:130] >       "pinned": false
	I1207 20:37:53.004885   33734 command_runner.go:130] >     },
	I1207 20:37:53.004894   33734 command_runner.go:130] >     {
	I1207 20:37:53.004906   33734 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1207 20:37:53.004916   33734 command_runner.go:130] >       "repoTags": [
	I1207 20:37:53.004927   33734 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1207 20:37:53.004942   33734 command_runner.go:130] >       ],
	I1207 20:37:53.004953   33734 command_runner.go:130] >       "repoDigests": [
	I1207 20:37:53.004965   33734 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1207 20:37:53.004976   33734 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1207 20:37:53.004995   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005006   33734 command_runner.go:130] >       "size": "295456551",
	I1207 20:37:53.005013   33734 command_runner.go:130] >       "uid": {
	I1207 20:37:53.005024   33734 command_runner.go:130] >         "value": "0"
	I1207 20:37:53.005033   33734 command_runner.go:130] >       },
	I1207 20:37:53.005043   33734 command_runner.go:130] >       "username": "",
	I1207 20:37:53.005052   33734 command_runner.go:130] >       "spec": null,
	I1207 20:37:53.005062   33734 command_runner.go:130] >       "pinned": false
	I1207 20:37:53.005074   33734 command_runner.go:130] >     },
	I1207 20:37:53.005080   33734 command_runner.go:130] >     {
	I1207 20:37:53.005090   33734 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1207 20:37:53.005100   33734 command_runner.go:130] >       "repoTags": [
	I1207 20:37:53.005112   33734 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1207 20:37:53.005121   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005132   33734 command_runner.go:130] >       "repoDigests": [
	I1207 20:37:53.005146   33734 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1207 20:37:53.005161   33734 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1207 20:37:53.005167   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005173   33734 command_runner.go:130] >       "size": "127226832",
	I1207 20:37:53.005182   33734 command_runner.go:130] >       "uid": {
	I1207 20:37:53.005193   33734 command_runner.go:130] >         "value": "0"
	I1207 20:37:53.005200   33734 command_runner.go:130] >       },
	I1207 20:37:53.005210   33734 command_runner.go:130] >       "username": "",
	I1207 20:37:53.005220   33734 command_runner.go:130] >       "spec": null,
	I1207 20:37:53.005230   33734 command_runner.go:130] >       "pinned": false
	I1207 20:37:53.005236   33734 command_runner.go:130] >     },
	I1207 20:37:53.005248   33734 command_runner.go:130] >     {
	I1207 20:37:53.005260   33734 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1207 20:37:53.005267   33734 command_runner.go:130] >       "repoTags": [
	I1207 20:37:53.005275   33734 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1207 20:37:53.005284   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005292   33734 command_runner.go:130] >       "repoDigests": [
	I1207 20:37:53.005308   33734 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1207 20:37:53.005324   33734 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1207 20:37:53.005333   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005343   33734 command_runner.go:130] >       "size": "123261750",
	I1207 20:37:53.005350   33734 command_runner.go:130] >       "uid": {
	I1207 20:37:53.005354   33734 command_runner.go:130] >         "value": "0"
	I1207 20:37:53.005359   33734 command_runner.go:130] >       },
	I1207 20:37:53.005370   33734 command_runner.go:130] >       "username": "",
	I1207 20:37:53.005381   33734 command_runner.go:130] >       "spec": null,
	I1207 20:37:53.005388   33734 command_runner.go:130] >       "pinned": false
	I1207 20:37:53.005397   33734 command_runner.go:130] >     },
	I1207 20:37:53.005406   33734 command_runner.go:130] >     {
	I1207 20:37:53.005422   33734 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1207 20:37:53.005432   33734 command_runner.go:130] >       "repoTags": [
	I1207 20:37:53.005443   33734 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1207 20:37:53.005450   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005455   33734 command_runner.go:130] >       "repoDigests": [
	I1207 20:37:53.005469   33734 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1207 20:37:53.005485   33734 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1207 20:37:53.005495   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005502   33734 command_runner.go:130] >       "size": "74749335",
	I1207 20:37:53.005512   33734 command_runner.go:130] >       "uid": null,
	I1207 20:37:53.005521   33734 command_runner.go:130] >       "username": "",
	I1207 20:37:53.005531   33734 command_runner.go:130] >       "spec": null,
	I1207 20:37:53.005538   33734 command_runner.go:130] >       "pinned": false
	I1207 20:37:53.005547   33734 command_runner.go:130] >     },
	I1207 20:37:53.005551   33734 command_runner.go:130] >     {
	I1207 20:37:53.005562   33734 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1207 20:37:53.005573   33734 command_runner.go:130] >       "repoTags": [
	I1207 20:37:53.005582   33734 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1207 20:37:53.005608   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005622   33734 command_runner.go:130] >       "repoDigests": [
	I1207 20:37:53.005683   33734 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1207 20:37:53.005700   33734 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1207 20:37:53.005706   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005714   33734 command_runner.go:130] >       "size": "61551410",
	I1207 20:37:53.005724   33734 command_runner.go:130] >       "uid": {
	I1207 20:37:53.005731   33734 command_runner.go:130] >         "value": "0"
	I1207 20:37:53.005740   33734 command_runner.go:130] >       },
	I1207 20:37:53.005747   33734 command_runner.go:130] >       "username": "",
	I1207 20:37:53.005756   33734 command_runner.go:130] >       "spec": null,
	I1207 20:37:53.005763   33734 command_runner.go:130] >       "pinned": false
	I1207 20:37:53.005771   33734 command_runner.go:130] >     },
	I1207 20:37:53.005775   33734 command_runner.go:130] >     {
	I1207 20:37:53.005787   33734 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1207 20:37:53.005802   33734 command_runner.go:130] >       "repoTags": [
	I1207 20:37:53.005814   33734 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1207 20:37:53.005823   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005833   33734 command_runner.go:130] >       "repoDigests": [
	I1207 20:37:53.005847   33734 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1207 20:37:53.005862   33734 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1207 20:37:53.005869   33734 command_runner.go:130] >       ],
	I1207 20:37:53.005876   33734 command_runner.go:130] >       "size": "750414",
	I1207 20:37:53.005885   33734 command_runner.go:130] >       "uid": {
	I1207 20:37:53.005893   33734 command_runner.go:130] >         "value": "65535"
	I1207 20:37:53.005902   33734 command_runner.go:130] >       },
	I1207 20:37:53.005914   33734 command_runner.go:130] >       "username": "",
	I1207 20:37:53.005937   33734 command_runner.go:130] >       "spec": null,
	I1207 20:37:53.005948   33734 command_runner.go:130] >       "pinned": false
	I1207 20:37:53.005954   33734 command_runner.go:130] >     }
	I1207 20:37:53.005963   33734 command_runner.go:130] >   ]
	I1207 20:37:53.005969   33734 command_runner.go:130] > }
	I1207 20:37:53.006489   33734 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 20:37:53.006502   33734 cache_images.go:84] Images are preloaded, skipping loading
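	The "preloaded" decision above is driven by the JSON listing; a quick way to eyeball the same data is to pull out just the repo tags. jq is an assumption here for the host side, it is not part of the guest image.
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'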
	I1207 20:37:53.006571   33734 ssh_runner.go:195] Run: crio config
	I1207 20:37:53.061020   33734 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1207 20:37:53.061047   33734 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1207 20:37:53.061057   33734 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1207 20:37:53.061062   33734 command_runner.go:130] > #
	I1207 20:37:53.061074   33734 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1207 20:37:53.061084   33734 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1207 20:37:53.061094   33734 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1207 20:37:53.061111   33734 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1207 20:37:53.061120   33734 command_runner.go:130] > # reload'.
	I1207 20:37:53.061134   33734 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1207 20:37:53.061145   33734 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1207 20:37:53.061156   33734 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1207 20:37:53.061181   33734 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1207 20:37:53.061185   33734 command_runner.go:130] > [crio]
	I1207 20:37:53.061191   33734 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1207 20:37:53.061196   33734 command_runner.go:130] > # containers images, in this directory.
	I1207 20:37:53.061207   33734 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1207 20:37:53.061222   33734 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1207 20:37:53.061233   33734 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1207 20:37:53.061243   33734 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1207 20:37:53.061257   33734 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1207 20:37:53.061265   33734 command_runner.go:130] > storage_driver = "overlay"
	I1207 20:37:53.061278   33734 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1207 20:37:53.061289   33734 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1207 20:37:53.061296   33734 command_runner.go:130] > storage_option = [
	I1207 20:37:53.061307   33734 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1207 20:37:53.061313   33734 command_runner.go:130] > ]
	I1207 20:37:53.061325   33734 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1207 20:37:53.061337   33734 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1207 20:37:53.061346   33734 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1207 20:37:53.061356   33734 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1207 20:37:53.061365   33734 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1207 20:37:53.061372   33734 command_runner.go:130] > # always happen on a node reboot
	I1207 20:37:53.061383   33734 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1207 20:37:53.061396   33734 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1207 20:37:53.061409   33734 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1207 20:37:53.061439   33734 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1207 20:37:53.061455   33734 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1207 20:37:53.061467   33734 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1207 20:37:53.061481   33734 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1207 20:37:53.061492   33734 command_runner.go:130] > # internal_wipe = true
	I1207 20:37:53.061507   33734 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1207 20:37:53.061519   33734 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1207 20:37:53.061531   33734 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1207 20:37:53.061559   33734 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1207 20:37:53.061567   33734 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1207 20:37:53.061571   33734 command_runner.go:130] > [crio.api]
	I1207 20:37:53.061578   33734 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1207 20:37:53.061587   33734 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1207 20:37:53.061595   33734 command_runner.go:130] > # IP address on which the stream server will listen.
	I1207 20:37:53.061600   33734 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1207 20:37:53.061606   33734 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1207 20:37:53.061613   33734 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1207 20:37:53.061617   33734 command_runner.go:130] > # stream_port = "0"
	I1207 20:37:53.061627   33734 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1207 20:37:53.061637   33734 command_runner.go:130] > # stream_enable_tls = false
	I1207 20:37:53.061647   33734 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1207 20:37:53.061657   33734 command_runner.go:130] > # stream_idle_timeout = ""
	I1207 20:37:53.061671   33734 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1207 20:37:53.061685   33734 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1207 20:37:53.061694   33734 command_runner.go:130] > # minutes.
	I1207 20:37:53.061701   33734 command_runner.go:130] > # stream_tls_cert = ""
	I1207 20:37:53.061729   33734 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1207 20:37:53.061742   33734 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1207 20:37:53.061752   33734 command_runner.go:130] > # stream_tls_key = ""
	I1207 20:37:53.061762   33734 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1207 20:37:53.061780   33734 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1207 20:37:53.061792   33734 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1207 20:37:53.061799   33734 command_runner.go:130] > # stream_tls_ca = ""
	I1207 20:37:53.061814   33734 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1207 20:37:53.061825   33734 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1207 20:37:53.061836   33734 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1207 20:37:53.061848   33734 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1207 20:37:53.061874   33734 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1207 20:37:53.061886   33734 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1207 20:37:53.061893   33734 command_runner.go:130] > [crio.runtime]
	I1207 20:37:53.061905   33734 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1207 20:37:53.061918   33734 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1207 20:37:53.061952   33734 command_runner.go:130] > # "nofile=1024:2048"
	I1207 20:37:53.061963   33734 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1207 20:37:53.061972   33734 command_runner.go:130] > # default_ulimits = [
	I1207 20:37:53.061978   33734 command_runner.go:130] > # ]
	I1207 20:37:53.061989   33734 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1207 20:37:53.061997   33734 command_runner.go:130] > # no_pivot = false
	I1207 20:37:53.062007   33734 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1207 20:37:53.062015   33734 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1207 20:37:53.062020   33734 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1207 20:37:53.062027   33734 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1207 20:37:53.062032   33734 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1207 20:37:53.062043   33734 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1207 20:37:53.062051   33734 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1207 20:37:53.062060   33734 command_runner.go:130] > # Cgroup setting for conmon
	I1207 20:37:53.062071   33734 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1207 20:37:53.062081   33734 command_runner.go:130] > conmon_cgroup = "pod"
	I1207 20:37:53.062094   33734 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1207 20:37:53.062106   33734 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1207 20:37:53.062120   33734 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1207 20:37:53.062127   33734 command_runner.go:130] > conmon_env = [
	I1207 20:37:53.062140   33734 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1207 20:37:53.062149   33734 command_runner.go:130] > ]
	I1207 20:37:53.062159   33734 command_runner.go:130] > # Additional environment variables to set for all the
	I1207 20:37:53.062170   33734 command_runner.go:130] > # containers. These are overridden if set in the
	I1207 20:37:53.062184   33734 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1207 20:37:53.062194   33734 command_runner.go:130] > # default_env = [
	I1207 20:37:53.062199   33734 command_runner.go:130] > # ]
	I1207 20:37:53.062210   33734 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1207 20:37:53.062217   33734 command_runner.go:130] > # selinux = false
	I1207 20:37:53.062223   33734 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1207 20:37:53.062235   33734 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1207 20:37:53.062248   33734 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1207 20:37:53.062258   33734 command_runner.go:130] > # seccomp_profile = ""
	I1207 20:37:53.062272   33734 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1207 20:37:53.062284   33734 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1207 20:37:53.062297   33734 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1207 20:37:53.062308   33734 command_runner.go:130] > # which might increase security.
	I1207 20:37:53.062319   33734 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1207 20:37:53.062333   33734 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1207 20:37:53.062347   33734 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1207 20:37:53.062360   33734 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1207 20:37:53.062374   33734 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1207 20:37:53.062391   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:37:53.062402   33734 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1207 20:37:53.062454   33734 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1207 20:37:53.062468   33734 command_runner.go:130] > # the cgroup blockio controller.
	I1207 20:37:53.062475   33734 command_runner.go:130] > # blockio_config_file = ""
	I1207 20:37:53.062486   33734 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1207 20:37:53.062493   33734 command_runner.go:130] > # irqbalance daemon.
	I1207 20:37:53.062501   33734 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1207 20:37:53.062512   33734 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1207 20:37:53.062525   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:37:53.062533   33734 command_runner.go:130] > # rdt_config_file = ""
	I1207 20:37:53.062543   33734 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1207 20:37:53.062554   33734 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1207 20:37:53.062564   33734 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1207 20:37:53.062574   33734 command_runner.go:130] > # separate_pull_cgroup = ""
	I1207 20:37:53.062586   33734 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1207 20:37:53.062596   33734 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1207 20:37:53.062600   33734 command_runner.go:130] > # will be added.
	I1207 20:37:53.062606   33734 command_runner.go:130] > # default_capabilities = [
	I1207 20:37:53.062610   33734 command_runner.go:130] > # 	"CHOWN",
	I1207 20:37:53.062614   33734 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1207 20:37:53.062617   33734 command_runner.go:130] > # 	"FSETID",
	I1207 20:37:53.062621   33734 command_runner.go:130] > # 	"FOWNER",
	I1207 20:37:53.062627   33734 command_runner.go:130] > # 	"SETGID",
	I1207 20:37:53.062630   33734 command_runner.go:130] > # 	"SETUID",
	I1207 20:37:53.062637   33734 command_runner.go:130] > # 	"SETPCAP",
	I1207 20:37:53.062640   33734 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1207 20:37:53.062644   33734 command_runner.go:130] > # 	"KILL",
	I1207 20:37:53.062647   33734 command_runner.go:130] > # ]
	I1207 20:37:53.062653   33734 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1207 20:37:53.062660   33734 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1207 20:37:53.062664   33734 command_runner.go:130] > # default_sysctls = [
	I1207 20:37:53.062667   33734 command_runner.go:130] > # ]
	I1207 20:37:53.062672   33734 command_runner.go:130] > # List of devices on the host that a
	I1207 20:37:53.062680   33734 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1207 20:37:53.062684   33734 command_runner.go:130] > # allowed_devices = [
	I1207 20:37:53.062690   33734 command_runner.go:130] > # 	"/dev/fuse",
	I1207 20:37:53.062696   33734 command_runner.go:130] > # ]
	I1207 20:37:53.062703   33734 command_runner.go:130] > # List of additional devices, specified as
	I1207 20:37:53.062718   33734 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1207 20:37:53.062729   33734 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1207 20:37:53.062784   33734 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1207 20:37:53.062798   33734 command_runner.go:130] > # additional_devices = [
	I1207 20:37:53.062804   33734 command_runner.go:130] > # ]
	I1207 20:37:53.062815   33734 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1207 20:37:53.062825   33734 command_runner.go:130] > # cdi_spec_dirs = [
	I1207 20:37:53.062835   33734 command_runner.go:130] > # 	"/etc/cdi",
	I1207 20:37:53.062841   33734 command_runner.go:130] > # 	"/var/run/cdi",
	I1207 20:37:53.062847   33734 command_runner.go:130] > # ]
	I1207 20:37:53.062854   33734 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1207 20:37:53.062864   33734 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1207 20:37:53.062872   33734 command_runner.go:130] > # Defaults to false.
	I1207 20:37:53.062883   33734 command_runner.go:130] > # device_ownership_from_security_context = false
	I1207 20:37:53.062897   33734 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1207 20:37:53.062914   33734 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1207 20:37:53.062924   33734 command_runner.go:130] > # hooks_dir = [
	I1207 20:37:53.062937   33734 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1207 20:37:53.062946   33734 command_runner.go:130] > # ]
	I1207 20:37:53.062956   33734 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1207 20:37:53.062969   33734 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1207 20:37:53.062981   33734 command_runner.go:130] > # its default mounts from the following two files:
	I1207 20:37:53.062990   33734 command_runner.go:130] > #
	I1207 20:37:53.063001   33734 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1207 20:37:53.063014   33734 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1207 20:37:53.063026   33734 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1207 20:37:53.063034   33734 command_runner.go:130] > #
	I1207 20:37:53.063043   33734 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1207 20:37:53.063052   33734 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1207 20:37:53.063058   33734 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1207 20:37:53.063065   33734 command_runner.go:130] > #      only add mounts it finds in this file.
	I1207 20:37:53.063073   33734 command_runner.go:130] > #
	I1207 20:37:53.063079   33734 command_runner.go:130] > # default_mounts_file = ""
	I1207 20:37:53.063087   33734 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1207 20:37:53.063096   33734 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1207 20:37:53.063100   33734 command_runner.go:130] > pids_limit = 1024
	I1207 20:37:53.063106   33734 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1207 20:37:53.063114   33734 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1207 20:37:53.063120   33734 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1207 20:37:53.063130   33734 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1207 20:37:53.063134   33734 command_runner.go:130] > # log_size_max = -1
	I1207 20:37:53.063141   33734 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1207 20:37:53.063147   33734 command_runner.go:130] > # log_to_journald = false
	I1207 20:37:53.063153   33734 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1207 20:37:53.063160   33734 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1207 20:37:53.063165   33734 command_runner.go:130] > # Path to directory for container attach sockets.
	I1207 20:37:53.063172   33734 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1207 20:37:53.063178   33734 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1207 20:37:53.063184   33734 command_runner.go:130] > # bind_mount_prefix = ""
	I1207 20:37:53.063189   33734 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1207 20:37:53.063196   33734 command_runner.go:130] > # read_only = false
	I1207 20:37:53.063204   33734 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1207 20:37:53.063212   33734 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1207 20:37:53.063216   33734 command_runner.go:130] > # live configuration reload.
	I1207 20:37:53.063244   33734 command_runner.go:130] > # log_level = "info"
	I1207 20:37:53.063253   33734 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1207 20:37:53.063258   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:37:53.063264   33734 command_runner.go:130] > # log_filter = ""
	I1207 20:37:53.063269   33734 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1207 20:37:53.063277   33734 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1207 20:37:53.063281   33734 command_runner.go:130] > # separated by comma.
	I1207 20:37:53.063288   33734 command_runner.go:130] > # uid_mappings = ""
	I1207 20:37:53.063293   33734 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1207 20:37:53.063301   33734 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1207 20:37:53.063306   33734 command_runner.go:130] > # separated by comma.
	I1207 20:37:53.063310   33734 command_runner.go:130] > # gid_mappings = ""
	I1207 20:37:53.063316   33734 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1207 20:37:53.063324   33734 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1207 20:37:53.063330   33734 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1207 20:37:53.063339   33734 command_runner.go:130] > # minimum_mappable_uid = -1
	I1207 20:37:53.063345   33734 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1207 20:37:53.063354   33734 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1207 20:37:53.063360   33734 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1207 20:37:53.063366   33734 command_runner.go:130] > # minimum_mappable_gid = -1
	I1207 20:37:53.063371   33734 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1207 20:37:53.063379   33734 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1207 20:37:53.063384   33734 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1207 20:37:53.063391   33734 command_runner.go:130] > # ctr_stop_timeout = 30
	I1207 20:37:53.063397   33734 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1207 20:37:53.063405   33734 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1207 20:37:53.063410   33734 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1207 20:37:53.063415   33734 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1207 20:37:53.063419   33734 command_runner.go:130] > drop_infra_ctr = false
	I1207 20:37:53.063426   33734 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1207 20:37:53.063433   33734 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1207 20:37:53.063441   33734 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1207 20:37:53.063447   33734 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1207 20:37:53.063456   33734 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1207 20:37:53.063463   33734 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1207 20:37:53.063468   33734 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1207 20:37:53.063477   33734 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1207 20:37:53.063481   33734 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1207 20:37:53.063490   33734 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1207 20:37:53.063496   33734 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1207 20:37:53.063502   33734 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1207 20:37:53.063509   33734 command_runner.go:130] > # default_runtime = "runc"
	I1207 20:37:53.063516   33734 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1207 20:37:53.063525   33734 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1207 20:37:53.063533   33734 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1207 20:37:53.063540   33734 command_runner.go:130] > # creation as a file is not desired either.
	I1207 20:37:53.063548   33734 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1207 20:37:53.063555   33734 command_runner.go:130] > # the hostname is being managed dynamically.
	I1207 20:37:53.063559   33734 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1207 20:37:53.063565   33734 command_runner.go:130] > # ]
	I1207 20:37:53.063571   33734 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1207 20:37:53.063582   33734 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1207 20:37:53.063588   33734 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1207 20:37:53.063596   33734 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1207 20:37:53.063599   33734 command_runner.go:130] > #
	I1207 20:37:53.063604   33734 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1207 20:37:53.063611   33734 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1207 20:37:53.063615   33734 command_runner.go:130] > #  runtime_type = "oci"
	I1207 20:37:53.063620   33734 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1207 20:37:53.063625   33734 command_runner.go:130] > #  privileged_without_host_devices = false
	I1207 20:37:53.063631   33734 command_runner.go:130] > #  allowed_annotations = []
	I1207 20:37:53.063635   33734 command_runner.go:130] > # Where:
	I1207 20:37:53.063640   33734 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1207 20:37:53.063646   33734 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1207 20:37:53.063654   33734 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1207 20:37:53.063662   33734 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1207 20:37:53.063666   33734 command_runner.go:130] > #   in $PATH.
	I1207 20:37:53.063675   33734 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1207 20:37:53.063680   33734 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1207 20:37:53.063690   33734 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1207 20:37:53.063696   33734 command_runner.go:130] > #   state.
	I1207 20:37:53.063702   33734 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1207 20:37:53.063707   33734 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1207 20:37:53.063728   33734 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1207 20:37:53.063735   33734 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1207 20:37:53.063742   33734 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1207 20:37:53.063750   33734 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1207 20:37:53.063755   33734 command_runner.go:130] > #   The currently recognized values are:
	I1207 20:37:53.063765   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1207 20:37:53.063772   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1207 20:37:53.063780   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1207 20:37:53.063787   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1207 20:37:53.063796   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1207 20:37:53.063802   33734 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1207 20:37:53.063810   33734 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1207 20:37:53.063816   33734 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1207 20:37:53.063823   33734 command_runner.go:130] > #   should be moved to the container's cgroup
	I1207 20:37:53.063829   33734 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1207 20:37:53.063836   33734 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1207 20:37:53.063840   33734 command_runner.go:130] > runtime_type = "oci"
	I1207 20:37:53.063846   33734 command_runner.go:130] > runtime_root = "/run/runc"
	I1207 20:37:53.063850   33734 command_runner.go:130] > runtime_config_path = ""
	I1207 20:37:53.063856   33734 command_runner.go:130] > monitor_path = ""
	I1207 20:37:53.063860   33734 command_runner.go:130] > monitor_cgroup = ""
	I1207 20:37:53.063864   33734 command_runner.go:130] > monitor_exec_cgroup = ""
	I1207 20:37:53.063872   33734 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1207 20:37:53.063876   33734 command_runner.go:130] > # running containers
	I1207 20:37:53.063884   33734 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1207 20:37:53.063890   33734 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1207 20:37:53.063939   33734 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1207 20:37:53.063947   33734 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1207 20:37:53.063952   33734 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1207 20:37:53.063956   33734 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1207 20:37:53.063961   33734 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1207 20:37:53.063966   33734 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1207 20:37:53.063977   33734 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1207 20:37:53.063983   33734 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1207 20:37:53.063990   33734 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1207 20:37:53.063997   33734 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1207 20:37:53.064003   33734 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1207 20:37:53.064012   33734 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1207 20:37:53.064019   33734 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1207 20:37:53.064027   33734 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1207 20:37:53.064035   33734 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1207 20:37:53.064047   33734 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1207 20:37:53.064054   33734 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1207 20:37:53.064061   33734 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1207 20:37:53.064065   33734 command_runner.go:130] > # Example:
	I1207 20:37:53.064070   33734 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1207 20:37:53.064076   33734 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1207 20:37:53.064080   33734 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1207 20:37:53.064087   33734 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1207 20:37:53.064091   33734 command_runner.go:130] > # cpuset = 0
	I1207 20:37:53.064099   33734 command_runner.go:130] > # cpushares = "0-1"
	I1207 20:37:53.064105   33734 command_runner.go:130] > # Where:
	I1207 20:37:53.064109   33734 command_runner.go:130] > # The workload name is workload-type.
	I1207 20:37:53.064116   33734 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1207 20:37:53.064123   33734 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1207 20:37:53.064128   33734 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1207 20:37:53.064136   33734 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1207 20:37:53.064144   33734 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1207 20:37:53.064147   33734 command_runner.go:130] > # 
	I1207 20:37:53.064153   33734 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1207 20:37:53.064159   33734 command_runner.go:130] > #
	I1207 20:37:53.064164   33734 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1207 20:37:53.064171   33734 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1207 20:37:53.064178   33734 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1207 20:37:53.064184   33734 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1207 20:37:53.064192   33734 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1207 20:37:53.064196   33734 command_runner.go:130] > [crio.image]
	I1207 20:37:53.064204   33734 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1207 20:37:53.064223   33734 command_runner.go:130] > # default_transport = "docker://"
	I1207 20:37:53.064231   33734 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1207 20:37:53.064237   33734 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1207 20:37:53.064244   33734 command_runner.go:130] > # global_auth_file = ""
	I1207 20:37:53.064249   33734 command_runner.go:130] > # The image used to instantiate infra containers.
	I1207 20:37:53.064256   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:37:53.064260   33734 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1207 20:37:53.064268   33734 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1207 20:37:53.064274   33734 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1207 20:37:53.064283   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:37:53.064288   33734 command_runner.go:130] > # pause_image_auth_file = ""
	I1207 20:37:53.064296   33734 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1207 20:37:53.064302   33734 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1207 20:37:53.064310   33734 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1207 20:37:53.064315   33734 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1207 20:37:53.064322   33734 command_runner.go:130] > # pause_command = "/pause"
	I1207 20:37:53.064327   33734 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1207 20:37:53.064337   33734 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1207 20:37:53.064350   33734 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1207 20:37:53.064358   33734 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1207 20:37:53.064363   33734 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1207 20:37:53.064369   33734 command_runner.go:130] > # signature_policy = ""
	I1207 20:37:53.064375   33734 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1207 20:37:53.064381   33734 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1207 20:37:53.064385   33734 command_runner.go:130] > # changing them here.
	I1207 20:37:53.064389   33734 command_runner.go:130] > # insecure_registries = [
	I1207 20:37:53.064392   33734 command_runner.go:130] > # ]
	I1207 20:37:53.064397   33734 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1207 20:37:53.064402   33734 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1207 20:37:53.064406   33734 command_runner.go:130] > # image_volumes = "mkdir"
	I1207 20:37:53.064411   33734 command_runner.go:130] > # Temporary directory to use for storing big files
	I1207 20:37:53.064415   33734 command_runner.go:130] > # big_files_temporary_dir = ""
	I1207 20:37:53.064420   33734 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1207 20:37:53.064424   33734 command_runner.go:130] > # CNI plugins.
	I1207 20:37:53.064428   33734 command_runner.go:130] > [crio.network]
	I1207 20:37:53.064436   33734 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1207 20:37:53.064443   33734 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1207 20:37:53.064447   33734 command_runner.go:130] > # cni_default_network = ""
	I1207 20:37:53.064452   33734 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1207 20:37:53.064456   33734 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1207 20:37:53.064461   33734 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1207 20:37:53.064465   33734 command_runner.go:130] > # plugin_dirs = [
	I1207 20:37:53.064468   33734 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1207 20:37:53.064471   33734 command_runner.go:130] > # ]
	I1207 20:37:53.064477   33734 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1207 20:37:53.064480   33734 command_runner.go:130] > [crio.metrics]
	I1207 20:37:53.064485   33734 command_runner.go:130] > # Globally enable or disable metrics support.
	I1207 20:37:53.064488   33734 command_runner.go:130] > enable_metrics = true
	I1207 20:37:53.064493   33734 command_runner.go:130] > # Specify enabled metrics collectors.
	I1207 20:37:53.064499   33734 command_runner.go:130] > # Per default all metrics are enabled.
	I1207 20:37:53.064505   33734 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1207 20:37:53.064510   33734 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1207 20:37:53.064515   33734 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1207 20:37:53.064519   33734 command_runner.go:130] > # metrics_collectors = [
	I1207 20:37:53.064525   33734 command_runner.go:130] > # 	"operations",
	I1207 20:37:53.064529   33734 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1207 20:37:53.064533   33734 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1207 20:37:53.064537   33734 command_runner.go:130] > # 	"operations_errors",
	I1207 20:37:53.064541   33734 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1207 20:37:53.064545   33734 command_runner.go:130] > # 	"image_pulls_by_name",
	I1207 20:37:53.064549   33734 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1207 20:37:53.064552   33734 command_runner.go:130] > # 	"image_pulls_failures",
	I1207 20:37:53.064556   33734 command_runner.go:130] > # 	"image_pulls_successes",
	I1207 20:37:53.064560   33734 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1207 20:37:53.064564   33734 command_runner.go:130] > # 	"image_layer_reuse",
	I1207 20:37:53.064568   33734 command_runner.go:130] > # 	"containers_oom_total",
	I1207 20:37:53.064573   33734 command_runner.go:130] > # 	"containers_oom",
	I1207 20:37:53.064577   33734 command_runner.go:130] > # 	"processes_defunct",
	I1207 20:37:53.064580   33734 command_runner.go:130] > # 	"operations_total",
	I1207 20:37:53.064584   33734 command_runner.go:130] > # 	"operations_latency_seconds",
	I1207 20:37:53.064589   33734 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1207 20:37:53.064592   33734 command_runner.go:130] > # 	"operations_errors_total",
	I1207 20:37:53.064598   33734 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1207 20:37:53.064603   33734 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1207 20:37:53.064608   33734 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1207 20:37:53.064615   33734 command_runner.go:130] > # 	"image_pulls_success_total",
	I1207 20:37:53.064620   33734 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1207 20:37:53.064624   33734 command_runner.go:130] > # 	"containers_oom_count_total",
	I1207 20:37:53.064628   33734 command_runner.go:130] > # ]
	I1207 20:37:53.064633   33734 command_runner.go:130] > # The port on which the metrics server will listen.
	I1207 20:37:53.064637   33734 command_runner.go:130] > # metrics_port = 9090
	I1207 20:37:53.064642   33734 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1207 20:37:53.064648   33734 command_runner.go:130] > # metrics_socket = ""
	I1207 20:37:53.064653   33734 command_runner.go:130] > # The certificate for the secure metrics server.
	I1207 20:37:53.064661   33734 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1207 20:37:53.064667   33734 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1207 20:37:53.064674   33734 command_runner.go:130] > # certificate on any modification event.
	I1207 20:37:53.064678   33734 command_runner.go:130] > # metrics_cert = ""
	I1207 20:37:53.064685   33734 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1207 20:37:53.064690   33734 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1207 20:37:53.064696   33734 command_runner.go:130] > # metrics_key = ""
	I1207 20:37:53.064704   33734 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1207 20:37:53.064709   33734 command_runner.go:130] > [crio.tracing]
	I1207 20:37:53.064719   33734 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1207 20:37:53.064734   33734 command_runner.go:130] > # enable_tracing = false
	I1207 20:37:53.064741   33734 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1207 20:37:53.064746   33734 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1207 20:37:53.064752   33734 command_runner.go:130] > # Number of samples to collect per million spans.
	I1207 20:37:53.064757   33734 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1207 20:37:53.064763   33734 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1207 20:37:53.064769   33734 command_runner.go:130] > [crio.stats]
	I1207 20:37:53.064775   33734 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1207 20:37:53.064782   33734 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1207 20:37:53.064786   33734 command_runner.go:130] > # stats_collection_period = 0
	I1207 20:37:53.065222   33734 command_runner.go:130] ! time="2023-12-07 20:37:53.008783704Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1207 20:37:53.065238   33734 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
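	The block above is the CRI-O configuration (crio.conf) dumped while minikube probes the runtime over SSH. As a minimal sketch, not taken from this log, of how the same settings could be cross-checked from inside the VM, assuming the stock crictl binary and the CRI-O socket path shown above:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info   # runtime status as reported over the CRI socket
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a  # containers CRI-O currently manages
	    curl -s http://127.0.0.1:9090/metrics | head                         # Prometheus metrics; enable_metrics is true and metrics_port defaults to 9090 (localhost bind assumed)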
	I1207 20:37:53.065300   33734 cni.go:84] Creating CNI manager for ""
	I1207 20:37:53.065311   33734 cni.go:136] 3 nodes found, recommending kindnet
	I1207 20:37:53.065329   33734 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:37:53.065348   33734 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-660958 NodeName:multinode-660958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 20:37:53.065470   33734 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-660958"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 20:37:53.065549   33734 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-660958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
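	A note on the kubelet drop-in above: the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before the next line supplies the new command. A minimal sketch of how such an override is normally activated by hand (ordinary systemd commands, not copied from this log):

	    sudo systemctl daemon-reload     # re-read unit files and drop-ins
	    sudo systemctl restart kubelet   # start the kubelet with the overridden ExecStart
	    systemctl cat kubelet            # print the unit plus every drop-in to confirm the override took effect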
	I1207 20:37:53.065599   33734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 20:37:53.075155   33734 command_runner.go:130] > kubeadm
	I1207 20:37:53.075176   33734 command_runner.go:130] > kubectl
	I1207 20:37:53.075182   33734 command_runner.go:130] > kubelet
	I1207 20:37:53.075202   33734 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 20:37:53.075256   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 20:37:53.084144   33734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1207 20:37:53.099079   33734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 20:37:53.113961   33734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1207 20:37:53.129494   33734 ssh_runner.go:195] Run: grep 192.168.39.19	control-plane.minikube.internal$ /etc/hosts
	I1207 20:37:53.132944   33734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
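	The two commands above first check whether /etc/hosts already maps control-plane.minikube.internal, then rewrite the file so exactly one entry with the current IP remains. The same pattern spelled out as a sketch (values taken from the log; the temp-file name is illustrative):

	    host=control-plane.minikube.internal
	    ip=192.168.39.19
	    # keep every line except an old mapping for $host, append the fresh mapping, then swap the file into place
	    { grep -v $'\t'"$host"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts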
	I1207 20:37:53.143377   33734 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958 for IP: 192.168.39.19
	I1207 20:37:53.143400   33734 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:37:53.143536   33734 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 20:37:53.143588   33734 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 20:37:53.143664   33734 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key
	I1207 20:37:53.143732   33734 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.key.8a6f02ba
	I1207 20:37:53.143779   33734 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.key
	I1207 20:37:53.143789   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 20:37:53.143802   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 20:37:53.143817   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 20:37:53.143832   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 20:37:53.143866   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 20:37:53.143881   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 20:37:53.143896   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 20:37:53.143910   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 20:37:53.143972   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 20:37:53.144000   33734 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 20:37:53.144011   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 20:37:53.144044   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 20:37:53.144069   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:37:53.144096   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 20:37:53.144146   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:37:53.144172   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:37:53.144196   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem -> /usr/share/ca-certificates/16840.pem
	I1207 20:37:53.144211   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /usr/share/ca-certificates/168402.pem
	I1207 20:37:53.144847   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 20:37:53.166867   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 20:37:53.187451   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 20:37:53.208177   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 20:37:53.229279   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:37:53.250802   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 20:37:53.272379   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:37:53.293807   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:37:53.315330   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:37:53.337555   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 20:37:53.359510   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 20:37:53.381608   33734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 20:37:53.397856   33734 ssh_runner.go:195] Run: openssl version
	I1207 20:37:53.403219   33734 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1207 20:37:53.403282   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 20:37:53.413500   33734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 20:37:53.417870   33734 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:37:53.418111   33734 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:37:53.418174   33734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 20:37:53.423439   33734 command_runner.go:130] > 51391683
	I1207 20:37:53.423583   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 20:37:53.434085   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 20:37:53.444444   33734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 20:37:53.448965   33734 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:37:53.449005   33734 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:37:53.449046   33734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 20:37:53.454144   33734 command_runner.go:130] > 3ec20f2e
	I1207 20:37:53.454391   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 20:37:53.465030   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:37:53.475205   33734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:37:53.479568   33734 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:37:53.479591   33734 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:37:53.479629   33734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:37:53.485022   33734 command_runner.go:130] > b5213941
	I1207 20:37:53.485096   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
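	Each CA certificate is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs under its own name, and then linked again under its OpenSSL subject hash with a ".0" suffix, which is the layout OpenSSL's CApath lookup expects. A sketch of that last step using the minikubeCA values printed above (b5213941 is the hash from the openssl run):

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"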
	I1207 20:37:53.495510   33734 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:37:53.499922   33734 command_runner.go:130] > ca.crt
	I1207 20:37:53.499935   33734 command_runner.go:130] > ca.key
	I1207 20:37:53.499942   33734 command_runner.go:130] > healthcheck-client.crt
	I1207 20:37:53.499949   33734 command_runner.go:130] > healthcheck-client.key
	I1207 20:37:53.499956   33734 command_runner.go:130] > peer.crt
	I1207 20:37:53.499961   33734 command_runner.go:130] > peer.key
	I1207 20:37:53.499967   33734 command_runner.go:130] > server.crt
	I1207 20:37:53.499974   33734 command_runner.go:130] > server.key
	I1207 20:37:53.500130   33734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 20:37:53.505671   33734 command_runner.go:130] > Certificate will not expire
	I1207 20:37:53.505727   33734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 20:37:53.511177   33734 command_runner.go:130] > Certificate will not expire
	I1207 20:37:53.511490   33734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 20:37:53.516896   33734 command_runner.go:130] > Certificate will not expire
	I1207 20:37:53.516957   33734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 20:37:53.522201   33734 command_runner.go:130] > Certificate will not expire
	I1207 20:37:53.522434   33734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 20:37:53.528029   33734 command_runner.go:130] > Certificate will not expire
	I1207 20:37:53.528079   33734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 20:37:53.533571   33734 command_runner.go:130] > Certificate will not expire
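	The -checkend 86400 runs ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; the "Certificate will not expire" lines are printed by openssl itself, and the exit status is what the caller presumably acts on. A sketch of the same check (path taken from the log):

	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	    echo "exit status: $?"   # 0 = still valid 24h from now, 1 = will have expired by then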
	I1207 20:37:53.533906   33734 kubeadm.go:404] StartCluster: {Name:multinode-660958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.20 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:37:53.534037   33734 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 20:37:53.534079   33734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 20:37:53.569650   33734 cri.go:89] found id: ""
	I1207 20:37:53.569719   33734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 20:37:53.579569   33734 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1207 20:37:53.579587   33734 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1207 20:37:53.579592   33734 command_runner.go:130] > /var/lib/minikube/etcd:
	I1207 20:37:53.579596   33734 command_runner.go:130] > member
	I1207 20:37:53.579846   33734 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 20:37:53.579878   33734 kubeadm.go:636] restartCluster start
	I1207 20:37:53.579935   33734 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 20:37:53.589491   33734 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:53.590149   33734 kubeconfig.go:92] found "multinode-660958" server: "https://192.168.39.19:8443"
	I1207 20:37:53.590637   33734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:37:53.590906   33734 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:37:53.591470   33734 cert_rotation.go:137] Starting client certificate rotation controller
	I1207 20:37:53.591657   33734 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 20:37:53.601185   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:53.601239   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:53.612611   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:53.612627   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:53.612669   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:53.624017   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:54.124693   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:54.124800   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:54.137527   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:54.624078   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:54.624158   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:54.637043   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:55.124589   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:55.124693   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:55.136747   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:55.624279   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:55.624343   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:55.636760   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:56.124203   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:56.124285   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:56.137344   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:56.624982   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:56.625063   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:56.637092   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:57.124764   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:57.124877   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:57.137008   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:57.625140   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:57.625221   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:57.637314   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:58.125036   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:58.125126   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:58.138338   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:58.624969   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:58.625054   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:58.637201   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:59.124753   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:59.124825   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:59.136626   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:37:59.624361   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:37:59.624447   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:37:59.635949   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:38:00.124433   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:38:00.124508   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:38:00.136139   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:38:00.624758   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:38:00.624837   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:38:00.636692   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:38:01.124247   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:38:01.124324   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:38:01.136049   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:38:01.624571   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:38:01.624671   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:38:01.636233   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:38:02.124925   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:38:02.125039   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:38:02.138131   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:38:02.624145   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:38:02.624262   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:38:02.636203   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:38:03.124816   33734 api_server.go:166] Checking apiserver status ...
	I1207 20:38:03.124880   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:38:03.136394   33734 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:38:03.602031   33734 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
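
The ten seconds of "Checking apiserver status ..." lines above are a fixed-interval retry: minikube re-runs sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms until its probe deadline expires, then concludes the apiserver is down and the cluster needs reconfiguring. A generic poll-until-deadline sketch in the same spirit (the check below is a simplified stand-in for that pgrep, not minikube's api_server code):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning is a simplified stand-in: pgrep exits non-zero when no
    // kube-apiserver process matches, as in the log lines above.
    func apiserverRunning(ctx context.Context) bool {
        return exec.CommandContext(ctx, "pgrep", "-f", "kube-apiserver").Run() == nil
    }

    // pollUntil retries check at the given interval until it succeeds or ctx expires.
    func pollUntil(ctx context.Context, interval time.Duration, check func(context.Context) bool) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if check(ctx) {
                return nil
            }
            select {
            case <-ctx.Done():
                return errors.New("apiserver error: " + ctx.Err().Error())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := pollUntil(ctx, 500*time.Millisecond, apiserverRunning); err != nil {
            fmt.Println("needs reconfigure:", err) // e.g. "context deadline exceeded"
        }
    }
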
	I1207 20:38:03.602059   33734 kubeadm.go:1135] stopping kube-system containers ...
	I1207 20:38:03.602069   33734 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 20:38:03.602128   33734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 20:38:03.645562   33734 cri.go:89] found id: ""
	I1207 20:38:03.645626   33734 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 20:38:03.661455   33734 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 20:38:03.671228   33734 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1207 20:38:03.671250   33734 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1207 20:38:03.671257   33734 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1207 20:38:03.671263   33734 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 20:38:03.671297   33734 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 20:38:03.671365   33734 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 20:38:03.680711   33734 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 20:38:03.680733   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:38:03.788450   33734 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 20:38:03.788826   33734 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1207 20:38:03.789325   33734 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1207 20:38:03.789799   33734 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 20:38:03.790487   33734 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1207 20:38:03.790959   33734 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1207 20:38:03.791829   33734 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1207 20:38:03.792283   33734 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1207 20:38:03.792782   33734 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1207 20:38:03.793469   33734 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 20:38:03.794043   33734 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 20:38:03.794669   33734 command_runner.go:130] > [certs] Using the existing "sa" key
	I1207 20:38:03.795912   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:38:03.847401   33734 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 20:38:04.000182   33734 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 20:38:04.266649   33734 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 20:38:04.758432   33734 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 20:38:04.975376   33734 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 20:38:04.978424   33734 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.182487255s)
	I1207 20:38:04.978462   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:38:05.156546   33734 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 20:38:05.156574   33734 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 20:38:05.156583   33734 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1207 20:38:05.156613   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:38:05.241421   33734 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 20:38:05.241469   33734 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 20:38:05.244545   33734 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 20:38:05.245587   33734 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 20:38:05.247644   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:38:05.326808   33734 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
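
Each kubeadm init phase run above is a bash invocation with the version-specific kubeadm binary directory prepended to PATH; on restart the existing certificates are reused and only the kubeconfigs, kubelet files and static-pod manifests are regenerated. A rough sketch of driving those phases the same way (illustrative only; it assumes passwordless sudo and the binary/config paths shown in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // runKubeadmPhase shells out to kubeadm with the version-specific binary
    // directory prepended to PATH, mirroring the commands logged above.
    func runKubeadmPhase(version, phase, config string) error {
        cmdline := fmt.Sprintf(
            `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
            version, phase, config)
        cmd := exec.Command("/bin/bash", "-c", cmdline)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            if err := runKubeadmPhase("v1.28.4", phase, "/var/tmp/minikube/kubeadm.yaml"); err != nil {
                fmt.Fprintln(os.Stderr, "phase failed:", err)
                os.Exit(1)
            }
        }
    }
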
	I1207 20:38:05.332231   33734 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:38:05.332322   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:38:05.345576   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:38:05.859224   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:38:06.359321   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:38:06.859193   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:38:07.358849   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:38:07.859549   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:38:07.877079   33734 command_runner.go:130] > 1095
	I1207 20:38:07.877117   33734 api_server.go:72] duration metric: took 2.544890839s to wait for apiserver process to appear ...
	I1207 20:38:07.877131   33734 api_server.go:88] waiting for apiserver healthz status ...
	I1207 20:38:07.877149   33734 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1207 20:38:11.255512   33734 api_server.go:279] https://192.168.39.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 20:38:11.255545   33734 api_server.go:103] status: https://192.168.39.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 20:38:11.255558   33734 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1207 20:38:11.305527   33734 api_server.go:279] https://192.168.39.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 20:38:11.305561   33734 api_server.go:103] status: https://192.168.39.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 20:38:11.806263   33734 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1207 20:38:11.815523   33734 api_server.go:279] https://192.168.39.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:38:11.815550   33734 api_server.go:103] status: https://192.168.39.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:38:12.306126   33734 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1207 20:38:12.311347   33734 api_server.go:279] https://192.168.39.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:38:12.311377   33734 api_server.go:103] status: https://192.168.39.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:38:12.806327   33734 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1207 20:38:12.811585   33734 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
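
The readiness gate above is plain HTTPS polling of the apiserver's /healthz endpoint: 403 while anonymous access is still blocked during RBAC bootstrap, 500 while post-start hooks such as rbac/bootstrap-roles are pending, and finally 200 with body "ok". A minimal probe in the same spirit (sketch only; it skips TLS verification for brevity, whereas minikube authenticates with the cluster's client certificate and CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: a real probe should trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.19:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
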
	I1207 20:38:12.811672   33734 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1207 20:38:12.811683   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:12.811696   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:12.811709   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:12.820635   33734 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1207 20:38:12.820659   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:12.820666   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:12.820672   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:12.820677   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:12.820683   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:12.820688   33734 round_trippers.go:580]     Content-Length: 264
	I1207 20:38:12.820693   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:12 GMT
	I1207 20:38:12.820698   33734 round_trippers.go:580]     Audit-Id: 5c86a52d-8053-46a4-b332-11d078212f69
	I1207 20:38:12.820724   33734 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1207 20:38:12.820832   33734 api_server.go:141] control plane version: v1.28.4
	I1207 20:38:12.820856   33734 api_server.go:131] duration metric: took 4.943717825s to wait for apiserver health ...
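
The /version response above is what minikube decodes to report "control plane version: v1.28.4" before picking a CNI. A small sketch of decoding that JSON shape (the struct below is an assumption covering only the fields visible in the log, not the real apimachinery version type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // versionInfo mirrors the fields visible in the /version response above.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        body := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
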
	I1207 20:38:12.820871   33734 cni.go:84] Creating CNI manager for ""
	I1207 20:38:12.820877   33734 cni.go:136] 3 nodes found, recommending kindnet
	I1207 20:38:12.823057   33734 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1207 20:38:12.824690   33734 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 20:38:12.833169   33734 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1207 20:38:12.833197   33734 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1207 20:38:12.833205   33734 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1207 20:38:12.833214   33734 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1207 20:38:12.833233   33734 command_runner.go:130] > Access: 2023-12-07 20:37:40.624910444 +0000
	I1207 20:38:12.833242   33734 command_runner.go:130] > Modify: 2023-12-05 19:27:41.000000000 +0000
	I1207 20:38:12.833249   33734 command_runner.go:130] > Change: 2023-12-07 20:37:38.610910444 +0000
	I1207 20:38:12.833263   33734 command_runner.go:130] >  Birth: -
	I1207 20:38:12.833458   33734 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1207 20:38:12.833474   33734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1207 20:38:12.853884   33734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 20:38:13.947129   33734 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1207 20:38:13.951850   33734 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1207 20:38:13.954679   33734 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1207 20:38:13.973155   33734 command_runner.go:130] > daemonset.apps/kindnet configured
	I1207 20:38:13.978710   33734 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.124789247s)
	I1207 20:38:13.978749   33734 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 20:38:13.978851   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:38:13.978862   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:13.978872   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:13.978881   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:13.983161   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:38:13.983181   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:13.983188   33734 round_trippers.go:580]     Audit-Id: 1b3d1575-f6ad-442b-a0df-7352526f867a
	I1207 20:38:13.983193   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:13.983199   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:13.983207   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:13.983214   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:13.983222   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:13 GMT
	I1207 20:38:13.985415   33734 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"810"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82021 chars]
	I1207 20:38:13.989366   33734 system_pods.go:59] 12 kube-system pods found
	I1207 20:38:13.989398   33734 system_pods.go:61] "coredns-5dd5756b68-7mss7" [6d6632ea-9aae-43e7-8b17-56399870082b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 20:38:13.989406   33734 system_pods.go:61] "etcd-multinode-660958" [997363d1-ef51-46b9-98ad-276aa803f3a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 20:38:13.989410   33734 system_pods.go:61] "kindnet-6flr5" [efdf3123-c2fd-4176-a308-0f104695b591] Running
	I1207 20:38:13.989415   33734 system_pods.go:61] "kindnet-d764j" [d1d942b5-9598-4a7d-bd1e-a283e096451c] Running
	I1207 20:38:13.989423   33734 system_pods.go:61] "kindnet-jpfqs" [158552a2-294c-4d08-81de-05b1daf7dfe1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 20:38:13.989433   33734 system_pods.go:61] "kube-apiserver-multinode-660958" [ab5b9260-db2a-4625-aff0-8b0fcf6a74a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 20:38:13.989444   33734 system_pods.go:61] "kube-controller-manager-multinode-660958" [fb58a1b4-61c1-41c6-b3af-824cc7a08c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 20:38:13.989456   33734 system_pods.go:61] "kube-proxy-mjptg" [1f4f9d19-e657-4472-a434-2e0810ba6cf3] Running
	I1207 20:38:13.989462   33734 system_pods.go:61] "kube-proxy-pfc45" [1e39fc15-3b2e-418c-92f1-32570e3bd853] Running
	I1207 20:38:13.989467   33734 system_pods.go:61] "kube-proxy-rxqfp" [c06f17e2-4050-4554-8c4a-057bca0bb5ff] Running
	I1207 20:38:13.989472   33734 system_pods.go:61] "kube-scheduler-multinode-660958" [ff5eb685-6086-4a98-b3b9-a485746dcbd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 20:38:13.989479   33734 system_pods.go:61] "storage-provisioner" [48bcf9dc-632d-4f04-9f6a-04d31cef5d88] Running
	I1207 20:38:13.989488   33734 system_pods.go:74] duration metric: took 10.732225ms to wait for pod list to return data ...
	I1207 20:38:13.989494   33734 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:38:13.989554   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1207 20:38:13.989564   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:13.989572   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:13.989580   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:13.992640   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:13.992663   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:13.992673   33734 round_trippers.go:580]     Audit-Id: 3529fb44-4057-4614-8f04-941ead9d98c7
	I1207 20:38:13.992682   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:13.992691   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:13.992702   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:13.992714   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:13.992722   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:13 GMT
	I1207 20:38:13.993017   33734 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"810"},"items":[{"metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16353 chars]
	I1207 20:38:13.993860   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:38:13.993885   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:38:13.993895   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:38:13.993907   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:38:13.993913   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:38:13.993938   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:38:13.993946   33734 node_conditions.go:105] duration metric: took 4.446924ms to run NodePressure ...
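
The NodePressure check above walks every node in the cluster and records its capacity (three nodes here, each with 2 CPUs and 17784752Ki of ephemeral storage). A small client-go sketch of listing nodes and reading those capacities (illustrative, not minikube's node_conditions code; the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig on disk; the path here is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
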
	I1207 20:38:13.993963   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:38:14.216369   33734 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1207 20:38:14.216398   33734 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1207 20:38:14.216422   33734 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 20:38:14.216508   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1207 20:38:14.216517   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.216524   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.216530   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.219886   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:14.219908   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.219931   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.219937   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.219942   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.219947   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.219952   33734 round_trippers.go:580]     Audit-Id: 4f822079-bc50-473e-b62c-507ebef8cab8
	I1207 20:38:14.219958   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.220599   33734 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"812"},"items":[{"metadata":{"name":"etcd-multinode-660958","namespace":"kube-system","uid":"997363d1-ef51-46b9-98ad-276aa803f3a8","resourceVersion":"734","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.19:2379","kubernetes.io/config.hash":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.mirror":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.seen":"2023-12-07T20:27:35.772724909Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I1207 20:38:14.221974   33734 kubeadm.go:787] kubelet initialised
	I1207 20:38:14.221998   33734 kubeadm.go:788] duration metric: took 5.5646ms waiting for restarted kubelet to initialise ...
	I1207 20:38:14.222011   33734 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:38:14.222084   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:38:14.222093   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.222104   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.222114   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.226730   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:38:14.226752   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.226761   33734 round_trippers.go:580]     Audit-Id: 1f05641b-01a0-415b-b5bc-1df4c280cb5d
	I1207 20:38:14.226770   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.226778   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.226785   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.226801   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.226813   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.228097   33734 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"812"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82021 chars]
	I1207 20:38:14.231099   33734 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:14.231186   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:14.231198   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.231208   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.231217   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.236674   33734 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1207 20:38:14.236689   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.236696   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.236702   33734 round_trippers.go:580]     Audit-Id: 3c24bf59-7d92-46f9-86f5-974652c3673c
	I1207 20:38:14.236710   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.236718   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.236725   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.236734   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.236832   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:14.237214   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:14.237226   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.237233   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.237239   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.239783   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:14.239795   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.239801   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.239806   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.239811   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.239816   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.239821   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.239825   33734 round_trippers.go:580]     Audit-Id: 4b84807f-f919-411f-9a94-cb4cc0ecc82d
	I1207 20:38:14.240181   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:14.240502   33734 pod_ready.go:97] node "multinode-660958" hosting pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:14.240522   33734 pod_ready.go:81] duration metric: took 9.397872ms waiting for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	E1207 20:38:14.240531   33734 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-660958" hosting pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
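
The pod_ready waits above inspect each pod's Ready condition but bail out early ("skipping!") whenever the hosting node still reports Ready=False, since no pod on it can become Ready until the node does. A tiny helper pair in the same spirit, operating on objects already fetched from the API (illustrative, not minikube's pod_ready code):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        node := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
            {Type: corev1.NodeReady, Status: corev1.ConditionFalse},
        }}}
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }}}
        if !nodeReady(node) {
            fmt.Println("node not Ready (skipping!)") // mirrors the wait loop above
        } else {
            fmt.Println("pod ready:", podReady(pod))
        }
    }
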
	I1207 20:38:14.240551   33734 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:14.240618   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-660958
	I1207 20:38:14.240628   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.240638   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.240651   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.243046   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:14.243059   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.243064   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.243070   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.243075   33734 round_trippers.go:580]     Audit-Id: 5feb8ae4-ddab-46e7-868a-3fbaf19074fc
	I1207 20:38:14.243080   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.243094   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.243104   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.243341   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-660958","namespace":"kube-system","uid":"997363d1-ef51-46b9-98ad-276aa803f3a8","resourceVersion":"734","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.19:2379","kubernetes.io/config.hash":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.mirror":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.seen":"2023-12-07T20:27:35.772724909Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1207 20:38:14.243781   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:14.243797   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.243808   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.243818   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.246008   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:14.246024   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.246030   33734 round_trippers.go:580]     Audit-Id: 14c0656a-71f4-4313-8948-437075ad590a
	I1207 20:38:14.246035   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.246040   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.246044   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.246050   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.246055   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.246376   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:14.246739   33734 pod_ready.go:97] node "multinode-660958" hosting pod "etcd-multinode-660958" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:14.246761   33734 pod_ready.go:81] duration metric: took 6.198835ms waiting for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	E1207 20:38:14.246772   33734 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-660958" hosting pod "etcd-multinode-660958" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:14.246790   33734 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:14.246856   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-660958
	I1207 20:38:14.246867   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.246878   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.246887   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.248992   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:14.249008   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.249014   33734 round_trippers.go:580]     Audit-Id: 47bcedd3-1ad1-408e-9596-10ca9bb3b72a
	I1207 20:38:14.249022   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.249033   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.249045   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.249051   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.249056   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.249407   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-660958","namespace":"kube-system","uid":"ab5b9260-db2a-4625-aff0-8b0fcf6a74a8","resourceVersion":"748","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.19:8443","kubernetes.io/config.hash":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.mirror":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.seen":"2023-12-07T20:27:35.772728261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1207 20:38:14.249801   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:14.249814   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.249821   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.249827   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.251958   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:14.251975   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.251984   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.251993   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.252002   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.252009   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.252021   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.252026   33734 round_trippers.go:580]     Audit-Id: d835b295-7a8b-4684-ad56-68d0a6011773
	I1207 20:38:14.252273   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:14.252553   33734 pod_ready.go:97] node "multinode-660958" hosting pod "kube-apiserver-multinode-660958" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:14.252568   33734 pod_ready.go:81] duration metric: took 5.76716ms waiting for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	E1207 20:38:14.252575   33734 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-660958" hosting pod "kube-apiserver-multinode-660958" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:14.252581   33734 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:14.252636   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-660958
	I1207 20:38:14.252644   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.252651   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.252657   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.255183   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:14.255194   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.255200   33734 round_trippers.go:580]     Audit-Id: 215a5486-7339-419a-a44a-30fab8cb5ba9
	I1207 20:38:14.255206   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.255211   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.255216   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.255222   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.255237   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.255500   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-660958","namespace":"kube-system","uid":"fb58a1b4-61c1-41c6-b3af-824cc7a08c14","resourceVersion":"751","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.mirror":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.seen":"2023-12-07T20:27:35.772729377Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I1207 20:38:14.379909   33734 request.go:629] Waited for 123.922793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:14.380021   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:14.380028   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.380038   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.380048   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.383110   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:14.383127   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.383134   33734 round_trippers.go:580]     Audit-Id: e4cf55ec-16db-4e39-a4e8-7611e673a950
	I1207 20:38:14.383140   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.383145   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.383155   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.383171   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.383182   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.383307   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:14.383634   33734 pod_ready.go:97] node "multinode-660958" hosting pod "kube-controller-manager-multinode-660958" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:14.383655   33734 pod_ready.go:81] duration metric: took 131.06727ms waiting for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	E1207 20:38:14.383667   33734 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-660958" hosting pod "kube-controller-manager-multinode-660958" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:14.383681   33734 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mjptg" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:14.579651   33734 request.go:629] Waited for 195.917386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjptg
	I1207 20:38:14.579731   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjptg
	I1207 20:38:14.579741   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.579748   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.579757   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.582808   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:14.582825   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.582832   33734 round_trippers.go:580]     Audit-Id: 77c9c2ec-7958-4546-b54d-35c459ce87cc
	I1207 20:38:14.582844   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.582855   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.582866   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.582874   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.582888   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.583162   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjptg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f4f9d19-e657-4472-a434-2e0810ba6cf3","resourceVersion":"696","creationTimestamp":"2023-12-07T20:29:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:29:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1207 20:38:14.779926   33734 request.go:629] Waited for 196.385069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:38:14.780011   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:38:14.780024   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.780048   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.780061   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.783303   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:14.783326   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.783334   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.783339   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.783345   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.783350   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.783362   33734 round_trippers.go:580]     Audit-Id: ee4ae04c-a0da-4601-a108-c171d44e27a0
	I1207 20:38:14.783370   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.783507   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m03","uid":"99d6ae8d-c617-438e-918b-4f4d3c4699de","resourceVersion":"803","creationTimestamp":"2023-12-07T20:30:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_30_16_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:30:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3964 chars]
	I1207 20:38:14.783785   33734 pod_ready.go:92] pod "kube-proxy-mjptg" in "kube-system" namespace has status "Ready":"True"
	I1207 20:38:14.783801   33734 pod_ready.go:81] duration metric: took 400.112594ms waiting for pod "kube-proxy-mjptg" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:14.783810   33734 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:14.979221   33734 request.go:629] Waited for 195.361726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:38:14.979291   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:38:14.979298   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:14.979312   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:14.979324   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:14.982315   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:14.982338   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:14.982347   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:14.982356   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:14.982363   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:14.982373   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:14.982381   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:14 GMT
	I1207 20:38:14.982393   33734 round_trippers.go:580]     Audit-Id: 71fcdf59-9779-4226-9d5d-9f87966b56e8
	I1207 20:38:14.982564   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pfc45","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e39fc15-3b2e-418c-92f1-32570e3bd853","resourceVersion":"789","creationTimestamp":"2023-12-07T20:27:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1207 20:38:15.179345   33734 request.go:629] Waited for 196.350785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:15.179432   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:15.179448   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:15.179460   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:15.179475   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:15.182569   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:15.182588   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:15.182596   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:15.182604   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:15.182612   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:15.182620   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:15.182628   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:15 GMT
	I1207 20:38:15.182638   33734 round_trippers.go:580]     Audit-Id: 0276401d-d20d-481b-97de-053b34016b4a
	I1207 20:38:15.182850   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:15.183287   33734 pod_ready.go:97] node "multinode-660958" hosting pod "kube-proxy-pfc45" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:15.183316   33734 pod_ready.go:81] duration metric: took 399.497561ms waiting for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	E1207 20:38:15.183333   33734 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-660958" hosting pod "kube-proxy-pfc45" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:15.183342   33734 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rxqfp" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:15.379793   33734 request.go:629] Waited for 196.371034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxqfp
	I1207 20:38:15.379875   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxqfp
	I1207 20:38:15.379888   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:15.379900   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:15.379911   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:15.385591   33734 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1207 20:38:15.385616   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:15.385626   33734 round_trippers.go:580]     Audit-Id: 2d19e5e6-a231-47b4-af73-9f725ab1bd85
	I1207 20:38:15.385634   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:15.385641   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:15.385648   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:15.385656   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:15.385663   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:15 GMT
	I1207 20:38:15.385982   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rxqfp","generateName":"kube-proxy-","namespace":"kube-system","uid":"c06f17e2-4050-4554-8c4a-057bca0bb5ff","resourceVersion":"481","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1207 20:38:15.579766   33734 request.go:629] Waited for 193.379671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:38:15.579888   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:38:15.579921   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:15.579933   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:15.579947   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:15.582646   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:15.582676   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:15.582687   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:15.582696   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:15.582705   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:15.582729   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:15 GMT
	I1207 20:38:15.582752   33734 round_trippers.go:580]     Audit-Id: d8e7d0f1-b22c-40af-98ce-81da697dee99
	I1207 20:38:15.582762   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:15.582910   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"721","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_30_16_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I1207 20:38:15.583329   33734 pod_ready.go:92] pod "kube-proxy-rxqfp" in "kube-system" namespace has status "Ready":"True"
	I1207 20:38:15.583365   33734 pod_ready.go:81] duration metric: took 400.011479ms waiting for pod "kube-proxy-rxqfp" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:15.583384   33734 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:15.779793   33734 request.go:629] Waited for 196.334708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:38:15.779881   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:38:15.779894   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:15.779905   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:15.779919   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:15.782822   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:15.782843   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:15.782853   33734 round_trippers.go:580]     Audit-Id: 338dfdfc-7391-41a6-9a41-352e17fb747c
	I1207 20:38:15.782863   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:15.782870   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:15.782882   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:15.782894   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:15.782906   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:15 GMT
	I1207 20:38:15.783020   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-660958","namespace":"kube-system","uid":"ff5eb685-6086-4a98-b3b9-a485746dcbd4","resourceVersion":"739","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.mirror":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.seen":"2023-12-07T20:27:35.772730586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I1207 20:38:15.979798   33734 request.go:629] Waited for 196.38415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:15.979867   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:15.979881   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:15.979905   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:15.979922   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:15.982563   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:15.982588   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:15.982597   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:15.982605   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:15.982614   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:15 GMT
	I1207 20:38:15.982623   33734 round_trippers.go:580]     Audit-Id: 8392e6ed-2a80-4ac7-ad21-2ce5c03dcb48
	I1207 20:38:15.982640   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:15.982647   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:15.982889   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:15.983248   33734 pod_ready.go:97] node "multinode-660958" hosting pod "kube-scheduler-multinode-660958" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:15.983274   33734 pod_ready.go:81] duration metric: took 399.877153ms waiting for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	E1207 20:38:15.983283   33734 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-660958" hosting pod "kube-scheduler-multinode-660958" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-660958" has status "Ready":"False"
	I1207 20:38:15.983289   33734 pod_ready.go:38] duration metric: took 1.761265502s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
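The readiness sweep summarized in the line above walks each system-critical pod and skips any pod whose hosting node reports Ready=False, which is why every control-plane pod on multinode-660958 is marked "(skipping!)". Below is a minimal client-go sketch of that per-pod decision; the kubeconfig path, pod name, and helper name are assumptions for illustration only and this is not minikube's actual pod_ready.go code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True; if the node
// hosting the pod is not Ready, the pod is treated as not ready and skipped,
// mirroring the "(skipping!)" messages in the log above.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	if pod.Spec.NodeName != "" {
		node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				return false, nil // hosting node not Ready: skip this pod
			}
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "etcd-multinode-660958")
	fmt.Println("ready:", ready, "err:", err)
}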
	I1207 20:38:15.983306   33734 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 20:38:15.995856   33734 command_runner.go:130] > -16
	I1207 20:38:15.995887   33734 ops.go:34] apiserver oom_adj: -16
	I1207 20:38:15.995894   33734 kubeadm.go:640] restartCluster took 22.416006404s
	I1207 20:38:15.995901   33734 kubeadm.go:406] StartCluster complete in 22.462001471s
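Just before reporting restartCluster/StartCluster complete, the log reads the kube-apiserver's OOM score adjustment with `cat /proc/$(pgrep kube-apiserver)/oom_adj` and gets -16. minikube issues that command over SSH inside the guest; the snippet below is only a local stand-in for the same check, not the SSH-runner code path.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverOOMAdj runs the same shell pipeline the log shows above. In the
// real flow this goes through minikube's SSH runner into the guest VM; here
// it is executed locally purely as a sketch.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	v, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", v) // -16 in the run above
}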
	I1207 20:38:15.995917   33734 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:38:15.995996   33734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:38:15.996701   33734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:38:15.996954   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 20:38:15.997086   33734 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 20:38:15.997243   33734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:38:15.997327   33734 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:38:16.000252   33734 out.go:177] * Enabled addons: 
	I1207 20:38:15.997608   33734 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:38:16.001827   33734 addons.go:502] enable addons completed in 4.755132ms: enabled=[]
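The client config dumped by kapi.go above is a standard client-go rest.Config pointing at https://192.168.39.19:8443 with the profile's client cert/key and CA. A rough sketch of building an equivalent config from the kubeconfig updated a few lines earlier; the QPS/Burst values here are assumptions, shown only to illustrate why the "Waited for ... due to client-side throttling" messages appear when the client-go defaults are left in place.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above.
	kubeconfig := "/home/jenkins/minikube-integration/17719-9628/kubeconfig"

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	// With QPS/Burst left at zero, client-go falls back to its defaults
	// (5 requests/s, burst 10), which is what produces the periodic
	// client-side throttling waits seen in the log. Raising them is optional.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("API server:", cfg.Host, "clientset ready:", cs != nil)
}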
	I1207 20:38:16.000525   33734 round_trippers.go:463] GET https://192.168.39.19:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1207 20:38:16.001855   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:16.001863   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:16.001869   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:16.004727   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:16.004743   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:16.004750   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:16.004756   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:16.004776   33734 round_trippers.go:580]     Content-Length: 291
	I1207 20:38:16.004784   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:15 GMT
	I1207 20:38:16.004789   33734 round_trippers.go:580]     Audit-Id: 00f5d43a-3391-4979-a38b-6e9d70f8f6c8
	I1207 20:38:16.004794   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:16.004799   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:16.004840   33734 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d249b622-1ef8-42db-b860-e5219d7241f8","resourceVersion":"811","creationTimestamp":"2023-12-07T20:27:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1207 20:38:16.004988   33734 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-660958" context rescaled to 1 replicas
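The rescale logged above goes through the deployment's scale subresource: a GET of apps/v1 deployments/coredns/scale followed, when needed, by an update. A minimal client-go sketch of that step; the kubeconfig path and the hard-coded target of 1 replica are assumptions mirroring this single-control-plane profile.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// Read the current scale of the coredns deployment, as in the GET above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Only write back if the replica count differs from the target.
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns replicas:", scale.Spec.Replicas)
}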
	I1207 20:38:16.005017   33734 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 20:38:16.006793   33734 out.go:177] * Verifying Kubernetes components...
	I1207 20:38:16.008268   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:38:16.102479   33734 command_runner.go:130] > apiVersion: v1
	I1207 20:38:16.102504   33734 command_runner.go:130] > data:
	I1207 20:38:16.102511   33734 command_runner.go:130] >   Corefile: |
	I1207 20:38:16.102516   33734 command_runner.go:130] >     .:53 {
	I1207 20:38:16.102522   33734 command_runner.go:130] >         log
	I1207 20:38:16.102534   33734 command_runner.go:130] >         errors
	I1207 20:38:16.102543   33734 command_runner.go:130] >         health {
	I1207 20:38:16.102550   33734 command_runner.go:130] >            lameduck 5s
	I1207 20:38:16.102559   33734 command_runner.go:130] >         }
	I1207 20:38:16.102567   33734 command_runner.go:130] >         ready
	I1207 20:38:16.102578   33734 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1207 20:38:16.102584   33734 command_runner.go:130] >            pods insecure
	I1207 20:38:16.102592   33734 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1207 20:38:16.102596   33734 command_runner.go:130] >            ttl 30
	I1207 20:38:16.102600   33734 command_runner.go:130] >         }
	I1207 20:38:16.102608   33734 command_runner.go:130] >         prometheus :9153
	I1207 20:38:16.102612   33734 command_runner.go:130] >         hosts {
	I1207 20:38:16.102617   33734 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1207 20:38:16.102623   33734 command_runner.go:130] >            fallthrough
	I1207 20:38:16.102628   33734 command_runner.go:130] >         }
	I1207 20:38:16.102635   33734 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1207 20:38:16.102643   33734 command_runner.go:130] >            max_concurrent 1000
	I1207 20:38:16.102650   33734 command_runner.go:130] >         }
	I1207 20:38:16.102657   33734 command_runner.go:130] >         cache 30
	I1207 20:38:16.102667   33734 command_runner.go:130] >         loop
	I1207 20:38:16.102676   33734 command_runner.go:130] >         reload
	I1207 20:38:16.102686   33734 command_runner.go:130] >         loadbalance
	I1207 20:38:16.102693   33734 command_runner.go:130] >     }
	I1207 20:38:16.102700   33734 command_runner.go:130] > kind: ConfigMap
	I1207 20:38:16.102703   33734 command_runner.go:130] > metadata:
	I1207 20:38:16.102711   33734 command_runner.go:130] >   creationTimestamp: "2023-12-07T20:27:35Z"
	I1207 20:38:16.102715   33734 command_runner.go:130] >   name: coredns
	I1207 20:38:16.102719   33734 command_runner.go:130] >   namespace: kube-system
	I1207 20:38:16.102726   33734 command_runner.go:130] >   resourceVersion: "358"
	I1207 20:38:16.102735   33734 command_runner.go:130] >   uid: e7783337-00bf-41eb-a7bf-df63fd11f78e
	I1207 20:38:16.102835   33734 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
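The "already contains ... skipping" decision above comes from inspecting the coredns ConfigMap's Corefile for the host.minikube.internal hosts entry; the log fetches the ConfigMap with kubectl over SSH. The sketch below does the same check directly with client-go rather than through kubectl, with the kubeconfig path as an assumption.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the coredns ConfigMap and look for the host record inside the
	// Corefile, mirroring the decision logged above.
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		fmt.Println("CoreDNS already contains the host record, skipping")
	} else {
		fmt.Println("host record missing; Corefile would need to be patched")
	}
}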
	I1207 20:38:16.102831   33734 node_ready.go:35] waiting up to 6m0s for node "multinode-660958" to be "Ready" ...
	I1207 20:38:16.179150   33734 request.go:629] Waited for 76.222726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:16.179206   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:16.179211   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:16.179218   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:16.179224   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:16.182123   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:16.182147   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:16.182157   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:16 GMT
	I1207 20:38:16.182165   33734 round_trippers.go:580]     Audit-Id: 0f6b3398-8e92-46b7-88d0-2de6fa5cee31
	I1207 20:38:16.182172   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:16.182180   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:16.182187   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:16.182194   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:16.182393   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:16.379173   33734 request.go:629] Waited for 196.351538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:16.379227   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:16.379232   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:16.379239   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:16.379245   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:16.382308   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:16.382333   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:16.382343   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:16 GMT
	I1207 20:38:16.382350   33734 round_trippers.go:580]     Audit-Id: eae778ca-1fcb-4a50-90d2-7af08f16459d
	I1207 20:38:16.382385   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:16.382397   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:16.382409   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:16.382417   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:16.382621   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:16.883690   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:16.883715   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:16.883723   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:16.883729   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:16.886886   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:16.886916   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:16.886926   33734 round_trippers.go:580]     Audit-Id: da70eaa9-1bd4-441f-ac7a-3cdd5305c827
	I1207 20:38:16.886935   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:16.886942   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:16.886951   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:16.886959   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:16.886967   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:16 GMT
	I1207 20:38:16.887141   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:17.383876   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:17.383924   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:17.383933   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:17.383941   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:17.386772   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:17.386791   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:17.386798   33734 round_trippers.go:580]     Audit-Id: b54f9047-ec88-4b6d-a633-6e539cdf3ac3
	I1207 20:38:17.386804   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:17.386809   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:17.386814   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:17.386819   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:17.386830   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:17 GMT
	I1207 20:38:17.387211   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:17.883354   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:17.883398   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:17.883410   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:17.883420   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:17.886169   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:17.886192   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:17.886203   33734 round_trippers.go:580]     Audit-Id: e4e5ed87-cf94-4128-ab1d-514021d7f938
	I1207 20:38:17.886212   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:17.886220   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:17.886227   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:17.886236   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:17.886256   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:17 GMT
	I1207 20:38:17.886451   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:18.384204   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:18.384231   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:18.384239   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:18.384253   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:18.387109   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:18.387133   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:18.387144   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:18.387149   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:18.387154   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:18.387162   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:18 GMT
	I1207 20:38:18.387170   33734 round_trippers.go:580]     Audit-Id: 3824feb4-ef26-4929-9662-b9997492359c
	I1207 20:38:18.387178   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:18.387564   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:18.387861   33734 node_ready.go:58] node "multinode-660958" has status "Ready":"False"
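From here the log settles into the node_ready.go loop: re-fetch the Node object roughly every half second and keep reporting until its Ready condition flips to True (still False at this point). A minimal sketch of that wait, assuming the node name, poll interval, and 6m timeout seen above; it is not minikube's actual node_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its NodeReady condition is True or the
// timeout expires, the same shape as the GET loop in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready after %s", name, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-660958", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}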
	I1207 20:38:18.883181   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:18.883203   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:18.883212   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:18.883218   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:18.886040   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:18.886065   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:18.886075   33734 round_trippers.go:580]     Audit-Id: 900aef6c-ecc2-42b8-9de1-6ee8b0251e92
	I1207 20:38:18.886083   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:18.886088   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:18.886093   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:18.886098   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:18.886104   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:18 GMT
	I1207 20:38:18.886599   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:19.383716   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:19.383738   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:19.383746   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:19.383752   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:19.386377   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:19.386405   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:19.386415   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:19.386423   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:19.386431   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:19.386439   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:19 GMT
	I1207 20:38:19.386448   33734 round_trippers.go:580]     Audit-Id: 327a2c74-0161-4040-aae6-16919e92eb63
	I1207 20:38:19.386457   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:19.387047   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:19.883466   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:19.883492   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:19.883500   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:19.883506   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:19.886399   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:19.886444   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:19.886455   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:19.886471   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:19 GMT
	I1207 20:38:19.886479   33734 round_trippers.go:580]     Audit-Id: 900cc9f8-1822-4081-a0f7-6ef04bea4564
	I1207 20:38:19.886492   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:19.886501   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:19.886510   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:19.886679   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:20.383386   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:20.383424   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:20.383435   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:20.383443   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:20.386474   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:20.386502   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:20.386511   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:20 GMT
	I1207 20:38:20.386524   33734 round_trippers.go:580]     Audit-Id: 24de2159-1946-4619-8164-09b55dec9d4e
	I1207 20:38:20.386535   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:20.386544   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:20.386566   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:20.386576   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:20.386770   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:20.883438   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:20.883464   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:20.883472   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:20.883480   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:20.886046   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:20.886073   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:20.886081   33734 round_trippers.go:580]     Audit-Id: d74954cf-4b42-4ced-aac5-dfbf6c244d02
	I1207 20:38:20.886089   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:20.886096   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:20.886103   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:20.886112   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:20.886122   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:20 GMT
	I1207 20:38:20.886391   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:20.886690   33734 node_ready.go:58] node "multinode-660958" has status "Ready":"False"
	I1207 20:38:21.384159   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:21.384192   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:21.384200   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:21.384206   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:21.387020   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:21.387040   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:21.387046   33734 round_trippers.go:580]     Audit-Id: 1cd075fc-b7df-44dd-ace5-329365875288
	I1207 20:38:21.387053   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:21.387060   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:21.387068   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:21.387076   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:21.387089   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:21 GMT
	I1207 20:38:21.387265   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"725","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1207 20:38:21.883901   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:21.883928   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:21.883936   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:21.883942   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:21.886577   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:21.886599   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:21.886608   33734 round_trippers.go:580]     Audit-Id: 0a21d65f-19ca-443c-a9ec-197ebec6b983
	I1207 20:38:21.886615   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:21.886620   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:21.886625   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:21.886630   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:21.886635   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:21 GMT
	I1207 20:38:21.886967   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:21.887258   33734 node_ready.go:49] node "multinode-660958" has status "Ready":"True"
	I1207 20:38:21.887273   33734 node_ready.go:38] duration metric: took 5.784415916s waiting for node "multinode-660958" to be "Ready" ...
	I1207 20:38:21.887280   33734 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
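
The entries above record the harness polling the node object roughly every 500ms until its Ready condition reports True, then switching to waiting on the system-critical pods. For context only, a minimal client-go sketch of that kind of readiness poll follows; it is an illustrative approximation under assumed imports and an assumed kubeconfig path, not the actual minikube node_ready.go/pod_ready.go code.

// Illustrative sketch only (not the minikube source): poll a node's Ready
// condition with client-go until it becomes True or the context times out.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reports Ready, as at 20:38:21 in the log above
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-time.After(500 * time.Millisecond): // poll interval matching the ~500ms cadence seen above
		}
	}
}

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "multinode-660958"); err != nil {
		panic(err)
	}
}

After the node is Ready, the log continues with the analogous per-pod wait, checking each system-critical pod's Ready condition in the same polling pattern.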
	I1207 20:38:21.887324   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:38:21.887332   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:21.887338   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:21.887344   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:21.890950   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:21.890968   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:21.890975   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:21.890980   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:21.890985   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:21 GMT
	I1207 20:38:21.890990   33734 round_trippers.go:580]     Audit-Id: e815aa47-bfa0-468c-a304-b4b873c18b53
	I1207 20:38:21.890994   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:21.890999   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:21.892118   33734 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"854"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82441 chars]
	I1207 20:38:21.894597   33734 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:21.894678   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:21.894688   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:21.894698   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:21.894709   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:21.897780   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:21.897798   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:21.897806   33734 round_trippers.go:580]     Audit-Id: b39a6d3d-f725-4122-8b85-c4772a00e9a4
	I1207 20:38:21.897813   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:21.897820   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:21.897831   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:21.897843   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:21.897850   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:21 GMT
	I1207 20:38:21.899178   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:21.899696   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:21.899718   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:21.899729   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:21.899744   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:21.902255   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:21.902273   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:21.902284   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:21.902292   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:21.902300   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:21 GMT
	I1207 20:38:21.902307   33734 round_trippers.go:580]     Audit-Id: 4e03687a-379a-4d9d-8ddc-4b05729922e9
	I1207 20:38:21.902317   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:21.902326   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:21.902499   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:21.902917   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:21.902931   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:21.902942   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:21.902952   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:21.907038   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:38:21.907057   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:21.907065   33734 round_trippers.go:580]     Audit-Id: 260ba43a-e0e8-4bec-9fbd-ebf9992745e0
	I1207 20:38:21.907073   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:21.907082   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:21.907092   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:21.907108   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:21.907120   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:21 GMT
	I1207 20:38:21.907871   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:21.908258   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:21.908272   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:21.908280   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:21.908286   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:21.911950   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:21.911968   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:21.911976   33734 round_trippers.go:580]     Audit-Id: f8ebaf92-cd9f-4748-8f8a-4842df0ce136
	I1207 20:38:21.911984   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:21.911992   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:21.912001   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:21.912013   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:21.912021   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:21 GMT
	I1207 20:38:21.912422   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:22.413207   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:22.413234   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:22.413246   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:22.413255   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:22.415841   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:22.415867   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:22.415876   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:22.415884   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:22.415892   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:22.415899   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:22.415908   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:22 GMT
	I1207 20:38:22.415927   33734 round_trippers.go:580]     Audit-Id: 43e891b5-4846-4c23-812e-daa4f1b227cd
	I1207 20:38:22.416134   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:22.416575   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:22.416587   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:22.416595   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:22.416607   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:22.419276   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:22.419295   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:22.419305   33734 round_trippers.go:580]     Audit-Id: f26e8e27-6070-40df-aa67-4c4bbb04652d
	I1207 20:38:22.419313   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:22.419325   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:22.419333   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:22.419341   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:22.419353   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:22 GMT
	I1207 20:38:22.420047   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:22.913098   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:22.913122   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:22.913138   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:22.913144   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:22.916642   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:22.916672   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:22.916689   33734 round_trippers.go:580]     Audit-Id: e9ed6f34-0ab3-4f95-b0ac-a47e27a03a1e
	I1207 20:38:22.916698   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:22.916706   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:22.916713   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:22.916721   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:22.916729   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:22 GMT
	I1207 20:38:22.917417   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:22.917844   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:22.917858   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:22.917865   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:22.917871   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:22.920459   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:22.920483   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:22.920493   33734 round_trippers.go:580]     Audit-Id: 5cd2f351-7550-439f-a140-043f5f512f1a
	I1207 20:38:22.920500   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:22.920509   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:22.920517   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:22.920533   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:22.920542   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:22 GMT
	I1207 20:38:22.920667   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:23.413205   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:23.413245   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:23.413256   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:23.413265   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:23.416481   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:23.416502   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:23.416509   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:23.416515   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:23 GMT
	I1207 20:38:23.416521   33734 round_trippers.go:580]     Audit-Id: 6491cb14-46aa-4ede-89bd-0655e2c3eba7
	I1207 20:38:23.416529   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:23.416553   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:23.416566   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:23.416965   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:23.417395   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:23.417409   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:23.417416   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:23.417425   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:23.420234   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:23.420250   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:23.420256   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:23.420262   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:23 GMT
	I1207 20:38:23.420267   33734 round_trippers.go:580]     Audit-Id: 469daab2-ce18-427f-bf3a-580391d24985
	I1207 20:38:23.420277   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:23.420287   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:23.420292   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:23.420444   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:23.913009   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:23.913036   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:23.913047   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:23.913056   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:23.921958   33734 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1207 20:38:23.921983   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:23.921990   33734 round_trippers.go:580]     Audit-Id: 20ec2136-4695-4007-bc50-e4a7cba4981f
	I1207 20:38:23.922000   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:23.922005   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:23.922011   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:23.922030   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:23.922038   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:23 GMT
	I1207 20:38:23.922355   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:23.922814   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:23.922828   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:23.922835   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:23.922841   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:23.927636   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:38:23.927660   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:23.927668   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:23.927677   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:23.927685   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:23 GMT
	I1207 20:38:23.927693   33734 round_trippers.go:580]     Audit-Id: 46ba47c7-8f2c-445a-a06d-baceb382c7a5
	I1207 20:38:23.927700   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:23.927712   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:23.927834   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:23.928295   33734 pod_ready.go:102] pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:38:24.413551   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:24.413575   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:24.413582   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:24.413589   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:24.418025   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:38:24.418050   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:24.418061   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:24.418069   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:24.418077   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:24 GMT
	I1207 20:38:24.418103   33734 round_trippers.go:580]     Audit-Id: 7c60aad7-7bed-464a-869c-c576a7090507
	I1207 20:38:24.418115   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:24.418123   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:24.418596   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:24.419006   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:24.419020   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:24.419028   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:24.419034   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:24.422316   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:24.422340   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:24.422349   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:24.422357   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:24.422364   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:24.422373   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:24 GMT
	I1207 20:38:24.422381   33734 round_trippers.go:580]     Audit-Id: 825da4d6-c93f-41c0-aae3-bfca4265f0a0
	I1207 20:38:24.422390   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:24.422631   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:24.913066   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:24.913089   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:24.913097   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:24.913103   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:24.916330   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:24.916351   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:24.916358   33734 round_trippers.go:580]     Audit-Id: 44b23d44-646e-4ee8-896d-f33fd9534039
	I1207 20:38:24.916367   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:24.916375   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:24.916382   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:24.916390   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:24.916404   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:24 GMT
	I1207 20:38:24.916599   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:24.917117   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:24.917133   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:24.917140   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:24.917146   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:24.919393   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:24.919408   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:24.919415   33734 round_trippers.go:580]     Audit-Id: fa3169d8-56a5-4ab0-8964-8c2af6f8e08b
	I1207 20:38:24.919422   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:24.919431   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:24.919439   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:24.919448   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:24.919456   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:24 GMT
	I1207 20:38:24.919708   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:25.413355   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:25.413380   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:25.413388   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:25.413396   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:25.416408   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:25.416451   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:25.416474   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:25.416483   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:25.416491   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:25.416499   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:25 GMT
	I1207 20:38:25.416507   33734 round_trippers.go:580]     Audit-Id: 103735db-51a3-4e08-81b9-a8235c9bb3cd
	I1207 20:38:25.416519   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:25.417057   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:25.417504   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:25.417525   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:25.417532   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:25.417540   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:25.419896   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:25.419915   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:25.419924   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:25.419931   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:25.419947   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:25 GMT
	I1207 20:38:25.419955   33734 round_trippers.go:580]     Audit-Id: f91e6b6f-5fba-46a9-bdf0-419b3b187228
	I1207 20:38:25.419962   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:25.419970   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:25.420302   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:25.914034   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:25.914060   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:25.914068   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:25.914074   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:25.917155   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:25.917171   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:25.917177   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:25 GMT
	I1207 20:38:25.917185   33734 round_trippers.go:580]     Audit-Id: 73b09f7f-8b4d-4269-aac8-10ee3599f73f
	I1207 20:38:25.917193   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:25.917202   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:25.917218   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:25.917224   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:25.917365   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:25.917808   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:25.917825   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:25.917832   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:25.917838   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:25.920062   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:25.920079   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:25.920085   33734 round_trippers.go:580]     Audit-Id: 80668d62-7c22-4c3a-8597-0b2c8d1ade02
	I1207 20:38:25.920091   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:25.920098   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:25.920106   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:25.920113   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:25.920121   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:25 GMT
	I1207 20:38:25.920971   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:26.413651   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:26.413674   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:26.413683   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:26.413690   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:26.418132   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:38:26.418152   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:26.418166   33734 round_trippers.go:580]     Audit-Id: e8fd947c-9d9b-4e17-afee-a49ae9597c38
	I1207 20:38:26.418175   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:26.418183   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:26.418192   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:26.418201   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:26.418212   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:26 GMT
	I1207 20:38:26.419063   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:26.419592   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:26.419609   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:26.419619   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:26.419630   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:26.423217   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:26.423234   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:26.423241   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:26 GMT
	I1207 20:38:26.423249   33734 round_trippers.go:580]     Audit-Id: 686e494e-52c5-4eac-937d-029fb3bd89d8
	I1207 20:38:26.423254   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:26.423266   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:26.423273   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:26.423279   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:26.423966   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:26.424259   33734 pod_ready.go:102] pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace has status "Ready":"False"
	I1207 20:38:26.913632   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:26.913654   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:26.913662   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:26.913668   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:26.916392   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:26.916413   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:26.916422   33734 round_trippers.go:580]     Audit-Id: 92a3b2ee-5a32-4b4b-99bb-f247961af527
	I1207 20:38:26.916431   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:26.916440   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:26.916452   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:26.916464   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:26.916475   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:26 GMT
	I1207 20:38:26.917054   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:26.917568   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:26.917585   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:26.917603   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:26.917617   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:26.919865   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:26.919877   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:26.919883   33734 round_trippers.go:580]     Audit-Id: 98827822-e741-48c6-8da8-15e4739b2fcc
	I1207 20:38:26.919889   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:26.919897   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:26.919903   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:26.919918   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:26.919929   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:26 GMT
	I1207 20:38:26.920045   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:27.413723   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:27.413750   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:27.413762   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:27.413770   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:27.416644   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:27.416664   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:27.416671   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:27.416676   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:27.416681   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:27.416687   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:27.416693   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:27 GMT
	I1207 20:38:27.416705   33734 round_trippers.go:580]     Audit-Id: 3849cd1e-6b22-4c41-99c6-452f997034a0
	I1207 20:38:27.416914   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:27.417497   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:27.417515   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:27.417525   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:27.417536   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:27.422677   33734 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1207 20:38:27.422697   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:27.422706   33734 round_trippers.go:580]     Audit-Id: 3cdc59d7-fef1-499b-9279-3ce43c54d4fc
	I1207 20:38:27.422714   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:27.422722   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:27.422731   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:27.422744   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:27.422755   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:27 GMT
	I1207 20:38:27.422932   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:27.913980   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:27.913998   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:27.914006   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:27.914014   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:27.916820   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:27.916838   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:27.916847   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:27.916855   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:27.916878   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:27 GMT
	I1207 20:38:27.916891   33734 round_trippers.go:580]     Audit-Id: 7ad61b72-17e9-447f-a284-c7c6a72142f5
	I1207 20:38:27.916900   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:27.916909   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:27.917443   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:27.917902   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:27.917917   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:27.917941   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:27.917951   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:27.920218   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:27.920231   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:27.920237   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:27.920244   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:27.920252   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:27.920259   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:27.920271   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:27 GMT
	I1207 20:38:27.920281   33734 round_trippers.go:580]     Audit-Id: b185d92f-da23-4e72-9e8c-cace8a8ce346
	I1207 20:38:27.920376   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:28.413110   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:28.413136   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.413145   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.413157   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.416479   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:28.416500   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.416509   33734 round_trippers.go:580]     Audit-Id: 595bb809-1dbd-43f0-8642-5ad39660aeaa
	I1207 20:38:28.416518   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.416525   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.416532   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.416540   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.416552   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.416750   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"727","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1207 20:38:28.417489   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:28.417511   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.417524   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.417539   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.419797   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:28.419812   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.419825   33734 round_trippers.go:580]     Audit-Id: d22f0883-b723-4ca4-8a7c-699adb30dcdb
	I1207 20:38:28.419834   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.419847   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.419859   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.419872   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.419884   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.420008   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:28.913758   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:38:28.913784   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.913795   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.913803   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.916781   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:28.916799   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.916806   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.916812   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.916817   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.916822   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.916827   33734 round_trippers.go:580]     Audit-Id: 357b48ad-501d-4ade-bccd-d1d70a4e394d
	I1207 20:38:28.916835   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.917037   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"879","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1207 20:38:28.917460   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:28.917480   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.917491   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.917499   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.919617   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:28.919635   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.919645   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.919654   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.919663   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.919672   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.919677   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.919682   33734 round_trippers.go:580]     Audit-Id: 3aa6f051-2bde-4597-9aba-44f6b0ba7368
	I1207 20:38:28.919798   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:28.920094   33734 pod_ready.go:92] pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace has status "Ready":"True"
	I1207 20:38:28.920112   33734 pod_ready.go:81] duration metric: took 7.025486468s waiting for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:28.920123   33734 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:28.920176   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-660958
	I1207 20:38:28.920187   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.920197   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.920208   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.922447   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:28.922467   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.922478   33734 round_trippers.go:580]     Audit-Id: f1faa592-2fb3-4603-92f8-6260a974a4f0
	I1207 20:38:28.922495   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.922507   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.922516   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.922527   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.922541   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.922645   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-660958","namespace":"kube-system","uid":"997363d1-ef51-46b9-98ad-276aa803f3a8","resourceVersion":"852","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.19:2379","kubernetes.io/config.hash":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.mirror":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.seen":"2023-12-07T20:27:35.772724909Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1207 20:38:28.922970   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:28.922991   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.923001   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.923011   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.924636   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:38:28.924654   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.924664   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.924672   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.924686   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.924699   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.924709   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.924721   33734 round_trippers.go:580]     Audit-Id: e618698e-40b7-423e-9ff8-4e97169594b8
	I1207 20:38:28.924844   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:28.925182   33734 pod_ready.go:92] pod "etcd-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:38:28.925203   33734 pod_ready.go:81] duration metric: took 5.072236ms waiting for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:28.925225   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:28.925285   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-660958
	I1207 20:38:28.925296   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.925306   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.925319   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.927253   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:38:28.927267   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.927275   33734 round_trippers.go:580]     Audit-Id: e435f4dc-04c3-4828-98af-dab4f13efa92
	I1207 20:38:28.927283   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.927291   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.927301   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.927316   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.927329   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.927500   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-660958","namespace":"kube-system","uid":"ab5b9260-db2a-4625-aff0-8b0fcf6a74a8","resourceVersion":"856","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.19:8443","kubernetes.io/config.hash":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.mirror":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.seen":"2023-12-07T20:27:35.772728261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1207 20:38:28.927932   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:28.927948   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.927958   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.927964   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.930427   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:38:28.930440   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.930449   33734 round_trippers.go:580]     Audit-Id: 89395cfe-b801-47b3-87ed-f2ea7acc89b1
	I1207 20:38:28.930457   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.930466   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.930480   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.930489   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.930501   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.930724   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:28.930983   33734 pod_ready.go:92] pod "kube-apiserver-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:38:28.930998   33734 pod_ready.go:81] duration metric: took 5.760523ms waiting for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:28.931020   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:28.931070   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-660958
	I1207 20:38:28.931079   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.931089   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.931098   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.932922   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:38:28.932937   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.932943   33734 round_trippers.go:580]     Audit-Id: e183afc1-7d69-4c85-ae1a-e4faa0ee7e49
	I1207 20:38:28.932951   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.932959   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.932975   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.932988   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.932993   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.933258   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-660958","namespace":"kube-system","uid":"fb58a1b4-61c1-41c6-b3af-824cc7a08c14","resourceVersion":"871","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.mirror":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.seen":"2023-12-07T20:27:35.772729377Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1207 20:38:28.933595   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:28.933608   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.933618   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.933626   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.935429   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:38:28.935454   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.935464   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.935479   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.935495   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.935504   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.935515   33734 round_trippers.go:580]     Audit-Id: def315d1-5a87-4921-a9f6-1952fc05b104
	I1207 20:38:28.935524   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.935680   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:28.936033   33734 pod_ready.go:92] pod "kube-controller-manager-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:38:28.936048   33734 pod_ready.go:81] duration metric: took 5.0197ms waiting for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:28.936059   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mjptg" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:28.936107   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjptg
	I1207 20:38:28.936117   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.936128   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.936140   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.937875   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:38:28.937887   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.937901   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.937910   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.937938   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.937949   33734 round_trippers.go:580]     Audit-Id: f081cb78-28f1-414f-a38a-a8e226043c62
	I1207 20:38:28.937961   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.937971   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.938208   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjptg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f4f9d19-e657-4472-a434-2e0810ba6cf3","resourceVersion":"696","creationTimestamp":"2023-12-07T20:29:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:29:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1207 20:38:28.938578   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:38:28.938591   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:28.938598   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:28.938604   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:28.940374   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:38:28.940387   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:28.940395   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:28.940403   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:28 GMT
	I1207 20:38:28.940411   33734 round_trippers.go:580]     Audit-Id: 01dfd387-25ff-4d8b-9ea2-108063f1e144
	I1207 20:38:28.940420   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:28.940429   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:28.940438   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:28.940560   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m03","uid":"99d6ae8d-c617-438e-918b-4f4d3c4699de","resourceVersion":"803","creationTimestamp":"2023-12-07T20:30:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_30_16_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:30:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3964 chars]
	I1207 20:38:28.940851   33734 pod_ready.go:92] pod "kube-proxy-mjptg" in "kube-system" namespace has status "Ready":"True"
	I1207 20:38:28.940868   33734 pod_ready.go:81] duration metric: took 4.799807ms waiting for pod "kube-proxy-mjptg" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:28.940881   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:29.114257   33734 request.go:629] Waited for 173.322956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:38:29.114346   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:38:29.114356   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:29.114369   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:29.114391   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:29.117856   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:29.117887   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:29.117896   33734 round_trippers.go:580]     Audit-Id: e6e2e4fe-3921-40be-8643-b91e39876f9e
	I1207 20:38:29.117903   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:29.117911   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:29.117919   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:29.117944   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:29.117957   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:29 GMT
	I1207 20:38:29.118265   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pfc45","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e39fc15-3b2e-418c-92f1-32570e3bd853","resourceVersion":"789","creationTimestamp":"2023-12-07T20:27:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1207 20:38:29.314646   33734 request.go:629] Waited for 195.90369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:29.314718   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:29.314726   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:29.314736   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:29.314762   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:29.317944   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:29.317968   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:29.317977   33734 round_trippers.go:580]     Audit-Id: 120002c8-a6c5-473b-a027-d272c428b787
	I1207 20:38:29.317984   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:29.317991   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:29.318000   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:29.318009   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:29.318018   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:29 GMT
	I1207 20:38:29.318193   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:29.318519   33734 pod_ready.go:92] pod "kube-proxy-pfc45" in "kube-system" namespace has status "Ready":"True"
	I1207 20:38:29.318538   33734 pod_ready.go:81] duration metric: took 377.649388ms waiting for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:29.318550   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rxqfp" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:29.513758   33734 request.go:629] Waited for 195.122069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxqfp
	I1207 20:38:29.513809   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxqfp
	I1207 20:38:29.513814   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:29.513821   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:29.513827   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:29.517022   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:29.517053   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:29.517061   33734 round_trippers.go:580]     Audit-Id: 73726174-7a85-4845-8d61-f8985ac1199b
	I1207 20:38:29.517070   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:29.517078   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:29.517091   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:29.517101   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:29.517112   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:29 GMT
	I1207 20:38:29.517344   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rxqfp","generateName":"kube-proxy-","namespace":"kube-system","uid":"c06f17e2-4050-4554-8c4a-057bca0bb5ff","resourceVersion":"481","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1207 20:38:29.714276   33734 request.go:629] Waited for 196.383028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:38:29.714329   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:38:29.714334   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:29.714342   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:29.714348   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:29.718826   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:38:29.718856   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:29.718874   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:29.718885   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:29.718893   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:29.718903   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:29 GMT
	I1207 20:38:29.718913   33734 round_trippers.go:580]     Audit-Id: 07159f23-c19d-47fd-9a9e-d3d6b47daea5
	I1207 20:38:29.718926   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:29.719959   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712","resourceVersion":"721","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_30_16_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I1207 20:38:29.720235   33734 pod_ready.go:92] pod "kube-proxy-rxqfp" in "kube-system" namespace has status "Ready":"True"
	I1207 20:38:29.720250   33734 pod_ready.go:81] duration metric: took 401.694165ms waiting for pod "kube-proxy-rxqfp" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:29.720260   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:29.914730   33734 request.go:629] Waited for 194.414323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:38:29.914797   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:38:29.914803   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:29.914810   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:29.914816   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:29.918695   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:29.918725   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:29.918736   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:29.918744   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:29.918771   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:29.918781   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:29 GMT
	I1207 20:38:29.918794   33734 round_trippers.go:580]     Audit-Id: b42225e2-9917-46cb-b0d7-c2516e35a54a
	I1207 20:38:29.918806   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:29.918952   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-660958","namespace":"kube-system","uid":"ff5eb685-6086-4a98-b3b9-a485746dcbd4","resourceVersion":"849","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.mirror":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.seen":"2023-12-07T20:27:35.772730586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1207 20:38:30.114706   33734 request.go:629] Waited for 195.355164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:30.114764   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:38:30.114769   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:30.114776   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:30.114782   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:30.117809   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:30.117829   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:30.117836   33734 round_trippers.go:580]     Audit-Id: 5446f9c3-68ed-418e-913f-949f19790aa2
	I1207 20:38:30.117844   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:30.117849   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:30.117857   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:30.117862   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:30.117868   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:30 GMT
	I1207 20:38:30.118218   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1207 20:38:30.118533   33734 pod_ready.go:92] pod "kube-scheduler-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:38:30.118573   33734 pod_ready.go:81] duration metric: took 398.305811ms waiting for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:38:30.118583   33734 pod_ready.go:38] duration metric: took 8.231295239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:38:30.118597   33734 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:38:30.118643   33734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:38:30.134194   33734 command_runner.go:130] > 1095
	I1207 20:38:30.134244   33734 api_server.go:72] duration metric: took 14.129201263s to wait for apiserver process to appear ...
	I1207 20:38:30.134260   33734 api_server.go:88] waiting for apiserver healthz status ...
	I1207 20:38:30.134282   33734 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1207 20:38:30.139336   33734 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1207 20:38:30.139404   33734 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1207 20:38:30.139412   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:30.139420   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:30.139426   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:30.140645   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:38:30.140668   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:30.140678   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:30.140686   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:30.140691   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:30.140696   33734 round_trippers.go:580]     Content-Length: 264
	I1207 20:38:30.140703   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:30 GMT
	I1207 20:38:30.140711   33734 round_trippers.go:580]     Audit-Id: 50bc2351-3a33-41e5-931a-64977e7de1e5
	I1207 20:38:30.140725   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:30.140746   33734 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1207 20:38:30.140791   33734 api_server.go:141] control plane version: v1.28.4
	I1207 20:38:30.140807   33734 api_server.go:131] duration metric: took 6.540165ms to wait for apiserver health ...
	I1207 20:38:30.140813   33734 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 20:38:30.314240   33734 request.go:629] Waited for 173.361474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:38:30.314303   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:38:30.314308   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:30.314315   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:30.314322   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:30.318878   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:38:30.318905   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:30.318913   33734 round_trippers.go:580]     Audit-Id: 23640090-7f15-47bd-896d-ee8ae885648e
	I1207 20:38:30.318918   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:30.318923   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:30.318928   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:30.318933   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:30.318938   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:30 GMT
	I1207 20:38:30.320420   33734 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"883"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"879","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81830 chars]
	I1207 20:38:30.323240   33734 system_pods.go:59] 12 kube-system pods found
	I1207 20:38:30.323262   33734 system_pods.go:61] "coredns-5dd5756b68-7mss7" [6d6632ea-9aae-43e7-8b17-56399870082b] Running
	I1207 20:38:30.323267   33734 system_pods.go:61] "etcd-multinode-660958" [997363d1-ef51-46b9-98ad-276aa803f3a8] Running
	I1207 20:38:30.323274   33734 system_pods.go:61] "kindnet-6flr5" [efdf3123-c2fd-4176-a308-0f104695b591] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 20:38:30.323281   33734 system_pods.go:61] "kindnet-d764j" [d1d942b5-9598-4a7d-bd1e-a283e096451c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 20:38:30.323285   33734 system_pods.go:61] "kindnet-jpfqs" [158552a2-294c-4d08-81de-05b1daf7dfe1] Running
	I1207 20:38:30.323293   33734 system_pods.go:61] "kube-apiserver-multinode-660958" [ab5b9260-db2a-4625-aff0-8b0fcf6a74a8] Running
	I1207 20:38:30.323306   33734 system_pods.go:61] "kube-controller-manager-multinode-660958" [fb58a1b4-61c1-41c6-b3af-824cc7a08c14] Running
	I1207 20:38:30.323309   33734 system_pods.go:61] "kube-proxy-mjptg" [1f4f9d19-e657-4472-a434-2e0810ba6cf3] Running
	I1207 20:38:30.323313   33734 system_pods.go:61] "kube-proxy-pfc45" [1e39fc15-3b2e-418c-92f1-32570e3bd853] Running
	I1207 20:38:30.323317   33734 system_pods.go:61] "kube-proxy-rxqfp" [c06f17e2-4050-4554-8c4a-057bca0bb5ff] Running
	I1207 20:38:30.323323   33734 system_pods.go:61] "kube-scheduler-multinode-660958" [ff5eb685-6086-4a98-b3b9-a485746dcbd4] Running
	I1207 20:38:30.323328   33734 system_pods.go:61] "storage-provisioner" [48bcf9dc-632d-4f04-9f6a-04d31cef5d88] Running
	I1207 20:38:30.323337   33734 system_pods.go:74] duration metric: took 182.5181ms to wait for pod list to return data ...
	I1207 20:38:30.323345   33734 default_sa.go:34] waiting for default service account to be created ...
	I1207 20:38:30.514796   33734 request.go:629] Waited for 191.369425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1207 20:38:30.514868   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1207 20:38:30.514876   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:30.514887   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:30.514896   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:30.518061   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:30.518083   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:30.518090   33734 round_trippers.go:580]     Content-Length: 261
	I1207 20:38:30.518096   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:30 GMT
	I1207 20:38:30.518101   33734 round_trippers.go:580]     Audit-Id: 0135eae8-97e7-4be7-a47c-ec5a3a04110d
	I1207 20:38:30.518106   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:30.518112   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:30.518117   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:30.518124   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:30.518151   33734 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"883"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e1c756f1-a1dc-42bb-91cf-feb818e20257","resourceVersion":"306","creationTimestamp":"2023-12-07T20:27:47Z"}}]}
	I1207 20:38:30.518320   33734 default_sa.go:45] found service account: "default"
	I1207 20:38:30.518337   33734 default_sa.go:55] duration metric: took 194.987746ms for default service account to be created ...
	I1207 20:38:30.518344   33734 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 20:38:30.714802   33734 request.go:629] Waited for 196.392678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:38:30.714881   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:38:30.714892   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:30.714903   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:30.714914   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:30.719185   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:38:30.719205   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:30.719211   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:30.719217   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:30 GMT
	I1207 20:38:30.719222   33734 round_trippers.go:580]     Audit-Id: 32cfbc81-90f2-4b33-a209-ad9fc3ab4d36
	I1207 20:38:30.719230   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:30.719238   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:30.719247   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:30.721003   33734 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"883"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"879","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81830 chars]
	I1207 20:38:30.723429   33734 system_pods.go:86] 12 kube-system pods found
	I1207 20:38:30.723450   33734 system_pods.go:89] "coredns-5dd5756b68-7mss7" [6d6632ea-9aae-43e7-8b17-56399870082b] Running
	I1207 20:38:30.723454   33734 system_pods.go:89] "etcd-multinode-660958" [997363d1-ef51-46b9-98ad-276aa803f3a8] Running
	I1207 20:38:30.723460   33734 system_pods.go:89] "kindnet-6flr5" [efdf3123-c2fd-4176-a308-0f104695b591] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 20:38:30.723468   33734 system_pods.go:89] "kindnet-d764j" [d1d942b5-9598-4a7d-bd1e-a283e096451c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 20:38:30.723483   33734 system_pods.go:89] "kindnet-jpfqs" [158552a2-294c-4d08-81de-05b1daf7dfe1] Running
	I1207 20:38:30.723490   33734 system_pods.go:89] "kube-apiserver-multinode-660958" [ab5b9260-db2a-4625-aff0-8b0fcf6a74a8] Running
	I1207 20:38:30.723499   33734 system_pods.go:89] "kube-controller-manager-multinode-660958" [fb58a1b4-61c1-41c6-b3af-824cc7a08c14] Running
	I1207 20:38:30.723506   33734 system_pods.go:89] "kube-proxy-mjptg" [1f4f9d19-e657-4472-a434-2e0810ba6cf3] Running
	I1207 20:38:30.723524   33734 system_pods.go:89] "kube-proxy-pfc45" [1e39fc15-3b2e-418c-92f1-32570e3bd853] Running
	I1207 20:38:30.723528   33734 system_pods.go:89] "kube-proxy-rxqfp" [c06f17e2-4050-4554-8c4a-057bca0bb5ff] Running
	I1207 20:38:30.723535   33734 system_pods.go:89] "kube-scheduler-multinode-660958" [ff5eb685-6086-4a98-b3b9-a485746dcbd4] Running
	I1207 20:38:30.723539   33734 system_pods.go:89] "storage-provisioner" [48bcf9dc-632d-4f04-9f6a-04d31cef5d88] Running
	I1207 20:38:30.723544   33734 system_pods.go:126] duration metric: took 205.196576ms to wait for k8s-apps to be running ...
	I1207 20:38:30.723553   33734 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 20:38:30.723600   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:38:30.749862   33734 system_svc.go:56] duration metric: took 26.301161ms WaitForService to wait for kubelet.
	I1207 20:38:30.749892   33734 kubeadm.go:581] duration metric: took 14.744850391s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 20:38:30.749909   33734 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:38:30.914307   33734 request.go:629] Waited for 164.324556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1207 20:38:30.914369   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1207 20:38:30.914374   33734 round_trippers.go:469] Request Headers:
	I1207 20:38:30.914381   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:38:30.914387   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:38:30.917875   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:38:30.917895   33734 round_trippers.go:577] Response Headers:
	I1207 20:38:30.917901   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:38:30.917907   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:38:30 GMT
	I1207 20:38:30.917913   33734 round_trippers.go:580]     Audit-Id: 16e003c3-970d-4efd-b092-3f231ca168e9
	I1207 20:38:30.917918   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:38:30.917934   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:38:30.917943   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:38:30.918147   33734 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"884"},"items":[{"metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"853","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16177 chars]
	I1207 20:38:30.918783   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:38:30.918804   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:38:30.918814   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:38:30.918817   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:38:30.918821   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:38:30.918825   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:38:30.918828   33734 node_conditions.go:105] duration metric: took 168.915961ms to run NodePressure ...
	I1207 20:38:30.918837   33734 start.go:228] waiting for startup goroutines ...
	I1207 20:38:30.918843   33734 start.go:233] waiting for cluster config update ...
	I1207 20:38:30.918849   33734 start.go:242] writing updated cluster config ...
	I1207 20:38:30.919259   33734 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:38:30.919334   33734 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
	I1207 20:38:30.922349   33734 out.go:177] * Starting worker node multinode-660958-m02 in cluster multinode-660958
	I1207 20:38:30.923802   33734 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:38:30.923822   33734 cache.go:56] Caching tarball of preloaded images
	I1207 20:38:30.923917   33734 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 20:38:30.923933   33734 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 20:38:30.924059   33734 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
	I1207 20:38:30.924221   33734 start.go:365] acquiring machines lock for multinode-660958-m02: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 20:38:30.924262   33734 start.go:369] acquired machines lock for "multinode-660958-m02" in 24.77µs
	I1207 20:38:30.924275   33734 start.go:96] Skipping create...Using existing machine configuration
	I1207 20:38:30.924280   33734 fix.go:54] fixHost starting: m02
	I1207 20:38:30.924522   33734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:38:30.924551   33734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:38:30.938704   33734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39451
	I1207 20:38:30.939122   33734 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:38:30.939595   33734 main.go:141] libmachine: Using API Version  1
	I1207 20:38:30.939615   33734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:38:30.939942   33734 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:38:30.940162   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:38:30.940330   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetState
	I1207 20:38:30.942039   33734 fix.go:102] recreateIfNeeded on multinode-660958-m02: state=Running err=<nil>
	W1207 20:38:30.942058   33734 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 20:38:30.944015   33734 out.go:177] * Updating the running kvm2 "multinode-660958-m02" VM ...
	I1207 20:38:30.945485   33734 machine.go:88] provisioning docker machine ...
	I1207 20:38:30.945505   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:38:30.945687   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetMachineName
	I1207 20:38:30.945863   33734 buildroot.go:166] provisioning hostname "multinode-660958-m02"
	I1207 20:38:30.945891   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetMachineName
	I1207 20:38:30.946045   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:38:30.948508   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:30.948917   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:38:30.948946   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:30.949150   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:38:30.949307   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:38:30.949461   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:38:30.949618   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:38:30.949761   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:38:30.950088   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1207 20:38:30.950102   33734 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-660958-m02 && echo "multinode-660958-m02" | sudo tee /etc/hostname
	I1207 20:38:31.081578   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-660958-m02
	
	I1207 20:38:31.081614   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:38:31.084147   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:31.084494   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:38:31.084516   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:31.084713   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:38:31.084873   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:38:31.085033   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:38:31.085182   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:38:31.085337   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:38:31.085636   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1207 20:38:31.085660   33734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-660958-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-660958-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-660958-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:38:31.199182   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:38:31.199210   33734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 20:38:31.199228   33734 buildroot.go:174] setting up certificates
	I1207 20:38:31.199239   33734 provision.go:83] configureAuth start
	I1207 20:38:31.199250   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetMachineName
	I1207 20:38:31.199528   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetIP
	I1207 20:38:31.202135   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:31.202528   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:38:31.202570   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:31.202674   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:38:31.204711   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:31.205015   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:38:31.205042   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:31.205158   33734 provision.go:138] copyHostCerts
	I1207 20:38:31.205192   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:38:31.205228   33734 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 20:38:31.205239   33734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:38:31.205319   33734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 20:38:31.205486   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:38:31.205544   33734 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 20:38:31.205556   33734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:38:31.205636   33734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 20:38:31.205717   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:38:31.205741   33734 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 20:38:31.205749   33734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:38:31.205783   33734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 20:38:31.205848   33734 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.multinode-660958-m02 san=[192.168.39.69 192.168.39.69 localhost 127.0.0.1 minikube multinode-660958-m02]
	I1207 20:38:31.314969   33734 provision.go:172] copyRemoteCerts
	I1207 20:38:31.315024   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:38:31.315049   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:38:31.317625   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:31.317983   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:38:31.318010   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:31.318157   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:38:31.318364   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:38:31.318510   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:38:31.318636   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa Username:docker}
	I1207 20:38:31.404943   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 20:38:31.405004   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 20:38:31.427503   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 20:38:31.427587   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1207 20:38:31.451924   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 20:38:31.451990   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 20:38:31.475425   33734 provision.go:86] duration metric: configureAuth took 276.171353ms
	I1207 20:38:31.475457   33734 buildroot.go:189] setting minikube options for container-runtime
	I1207 20:38:31.475736   33734 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:38:31.475821   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:38:31.478571   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:31.479009   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:38:31.479030   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:38:31.479251   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:38:31.479444   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:38:31.479583   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:38:31.479686   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:38:31.479860   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:38:31.480174   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1207 20:38:31.480190   33734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 20:40:02.034993   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 20:40:02.035017   33734 machine.go:91] provisioned docker machine in 1m31.089517956s
	I1207 20:40:02.035029   33734 start.go:300] post-start starting for "multinode-660958-m02" (driver="kvm2")
	I1207 20:40:02.035044   33734 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:40:02.035062   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:40:02.035378   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:40:02.035427   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:40:02.038379   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:02.038729   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:40:02.038755   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:02.038937   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:40:02.039122   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:40:02.039258   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:40:02.039414   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa Username:docker}
	I1207 20:40:02.134132   33734 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:40:02.138402   33734 command_runner.go:130] > NAME=Buildroot
	I1207 20:40:02.138420   33734 command_runner.go:130] > VERSION=2021.02.12-1-ge2b7375-dirty
	I1207 20:40:02.138425   33734 command_runner.go:130] > ID=buildroot
	I1207 20:40:02.138432   33734 command_runner.go:130] > VERSION_ID=2021.02.12
	I1207 20:40:02.138436   33734 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1207 20:40:02.138461   33734 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 20:40:02.138478   33734 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 20:40:02.138544   33734 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 20:40:02.138635   33734 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 20:40:02.138648   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /etc/ssl/certs/168402.pem
	I1207 20:40:02.138753   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 20:40:02.147818   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:40:02.171766   33734 start.go:303] post-start completed in 136.721553ms
	I1207 20:40:02.171786   33734 fix.go:56] fixHost completed within 1m31.247506371s
	I1207 20:40:02.171804   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:40:02.174248   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:02.174693   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:40:02.174718   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:02.174840   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:40:02.175028   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:40:02.175173   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:40:02.175271   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:40:02.175434   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:40:02.175726   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1207 20:40:02.175737   33734 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 20:40:02.287064   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701981602.274404773
	
	I1207 20:40:02.287085   33734 fix.go:206] guest clock: 1701981602.274404773
	I1207 20:40:02.287095   33734 fix.go:219] Guest: 2023-12-07 20:40:02.274404773 +0000 UTC Remote: 2023-12-07 20:40:02.171790229 +0000 UTC m=+452.877511189 (delta=102.614544ms)
	I1207 20:40:02.287114   33734 fix.go:190] guest clock delta is within tolerance: 102.614544ms
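
The fix step above reads the guest clock with `date +%s.%N` over SSH and compares it against the host-side timestamp captured for the same call, accepting the machine when the absolute delta stays within a tolerance. A small Go sketch of that comparison; the timestamps are the ones logged above, while the 2s tolerance is only an assumption for the example, not minikube's setting:

// Illustrative sketch of the guest-clock delta check logged above.
package main

import (
	"fmt"
	"time"
)

func clockWithinTolerance(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1701981602, 274404773)  // parsed from `date +%s.%N` on the guest
	remote := time.Unix(1701981602, 171790229) // host-side timestamp for the same call
	if delta, ok := clockWithinTolerance(guest, remote, 2*time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
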
	I1207 20:40:02.287120   33734 start.go:83] releasing machines lock for "multinode-660958-m02", held for 1m31.362849032s
	I1207 20:40:02.287148   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:40:02.287371   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetIP
	I1207 20:40:02.289669   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:02.290044   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:40:02.290065   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:02.292230   33734 out.go:177] * Found network options:
	I1207 20:40:02.293936   33734 out.go:177]   - NO_PROXY=192.168.39.19
	W1207 20:40:02.295312   33734 proxy.go:119] fail to check proxy env: Error ip not in block
	I1207 20:40:02.295342   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:40:02.295932   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:40:02.296132   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:40:02.296244   33734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:40:02.296285   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	W1207 20:40:02.296356   33734 proxy.go:119] fail to check proxy env: Error ip not in block
	I1207 20:40:02.296431   33734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 20:40:02.296453   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:40:02.298813   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:02.299082   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:02.299186   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:40:02.299213   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:02.299369   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:40:02.299530   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:40:02.299564   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:02.299536   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:40:02.299757   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:40:02.299757   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:40:02.299945   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:40:02.299989   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa Username:docker}
	I1207 20:40:02.300085   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:40:02.300201   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa Username:docker}
	I1207 20:40:02.544687   33734 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1207 20:40:02.544793   33734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1207 20:40:02.550574   33734 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1207 20:40:02.550750   33734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 20:40:02.550806   33734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 20:40:02.558863   33734 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 20:40:02.558885   33734 start.go:475] detecting cgroup driver to use...
	I1207 20:40:02.558954   33734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 20:40:02.571839   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 20:40:02.584601   33734 docker.go:203] disabling cri-docker service (if available) ...
	I1207 20:40:02.584651   33734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 20:40:02.597996   33734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 20:40:02.610985   33734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 20:40:02.746639   33734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 20:40:02.866479   33734 docker.go:219] disabling docker service ...
	I1207 20:40:02.866535   33734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 20:40:02.881632   33734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 20:40:02.893648   33734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 20:40:03.019575   33734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 20:40:03.156699   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 20:40:03.168980   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:40:03.186793   33734 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1207 20:40:03.186860   33734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 20:40:03.186909   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:40:03.197592   33734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 20:40:03.197652   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:40:03.207966   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:40:03.217998   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:40:03.227710   33734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 20:40:03.238369   33734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:40:03.247016   33734 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1207 20:40:03.247104   33734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 20:40:03.255827   33734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:40:03.375518   33734 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 20:40:05.130269   33734 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.754718126s)
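
The runtime setup above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image pinned to registry.k8s.io/pause:3.9, cgroup_manager switched to cgroupfs, conmon_cgroup set to "pod") and then restarts crio. A rough Go equivalent of those substitutions applied to an in-memory copy of the file, shown only to make the intent of the sed expressions explicit; the input values are placeholders:

// Sketch of what the sed edits logged above change in 02-crio.conf:
// pin the pause image, switch the cgroup manager to cgroupfs, and set
// conmon_cgroup to "pod". Illustration only, not minikube's implementation.
package main

import (
	"fmt"
	"regexp"
)

func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
		ReplaceAllString(conf, "") // drop any existing conmon_cgroup line
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}
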
	I1207 20:40:05.130296   33734 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 20:40:05.130360   33734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 20:40:05.138016   33734 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1207 20:40:05.138040   33734 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1207 20:40:05.138050   33734 command_runner.go:130] > Device: 16h/22d	Inode: 1214        Links: 1
	I1207 20:40:05.138060   33734 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1207 20:40:05.138069   33734 command_runner.go:130] > Access: 2023-12-07 20:40:05.044676918 +0000
	I1207 20:40:05.138078   33734 command_runner.go:130] > Modify: 2023-12-07 20:40:05.044676918 +0000
	I1207 20:40:05.138086   33734 command_runner.go:130] > Change: 2023-12-07 20:40:05.044676918 +0000
	I1207 20:40:05.138092   33734 command_runner.go:130] >  Birth: -
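
The "Will wait 60s for socket path" step above polls for /var/run/crio/crio.sock until the socket appears or the deadline passes. A compact Go sketch of such a wait loop, assuming local filesystem access instead of the ssh_runner used in the log; the poll interval is chosen only for illustration:

// Minimal sketch of a "wait up to 60s for the CRI socket" check like the
// one logged above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
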
	I1207 20:40:05.138679   33734 start.go:543] Will wait 60s for crictl version
	I1207 20:40:05.138728   33734 ssh_runner.go:195] Run: which crictl
	I1207 20:40:05.144365   33734 command_runner.go:130] > /usr/bin/crictl
	I1207 20:40:05.144679   33734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 20:40:05.189779   33734 command_runner.go:130] > Version:  0.1.0
	I1207 20:40:05.189808   33734 command_runner.go:130] > RuntimeName:  cri-o
	I1207 20:40:05.189838   33734 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1207 20:40:05.189848   33734 command_runner.go:130] > RuntimeApiVersion:  v1
	I1207 20:40:05.191104   33734 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 20:40:05.191169   33734 ssh_runner.go:195] Run: crio --version
	I1207 20:40:05.243118   33734 command_runner.go:130] > crio version 1.24.1
	I1207 20:40:05.243147   33734 command_runner.go:130] > Version:          1.24.1
	I1207 20:40:05.243157   33734 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1207 20:40:05.243163   33734 command_runner.go:130] > GitTreeState:     dirty
	I1207 20:40:05.243171   33734 command_runner.go:130] > BuildDate:        2023-12-05T19:18:32Z
	I1207 20:40:05.243183   33734 command_runner.go:130] > GoVersion:        go1.19.9
	I1207 20:40:05.243190   33734 command_runner.go:130] > Compiler:         gc
	I1207 20:40:05.243197   33734 command_runner.go:130] > Platform:         linux/amd64
	I1207 20:40:05.243205   33734 command_runner.go:130] > Linkmode:         dynamic
	I1207 20:40:05.243216   33734 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1207 20:40:05.243224   33734 command_runner.go:130] > SeccompEnabled:   true
	I1207 20:40:05.243230   33734 command_runner.go:130] > AppArmorEnabled:  false
	I1207 20:40:05.244581   33734 ssh_runner.go:195] Run: crio --version
	I1207 20:40:05.287882   33734 command_runner.go:130] > crio version 1.24.1
	I1207 20:40:05.287904   33734 command_runner.go:130] > Version:          1.24.1
	I1207 20:40:05.287916   33734 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1207 20:40:05.287923   33734 command_runner.go:130] > GitTreeState:     dirty
	I1207 20:40:05.287933   33734 command_runner.go:130] > BuildDate:        2023-12-05T19:18:32Z
	I1207 20:40:05.287944   33734 command_runner.go:130] > GoVersion:        go1.19.9
	I1207 20:40:05.287955   33734 command_runner.go:130] > Compiler:         gc
	I1207 20:40:05.287965   33734 command_runner.go:130] > Platform:         linux/amd64
	I1207 20:40:05.287986   33734 command_runner.go:130] > Linkmode:         dynamic
	I1207 20:40:05.288002   33734 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1207 20:40:05.288008   33734 command_runner.go:130] > SeccompEnabled:   true
	I1207 20:40:05.288019   33734 command_runner.go:130] > AppArmorEnabled:  false
	I1207 20:40:05.292145   33734 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 20:40:05.293698   33734 out.go:177]   - env NO_PROXY=192.168.39.19
	I1207 20:40:05.295176   33734 main.go:141] libmachine: (multinode-660958-m02) Calling .GetIP
	I1207 20:40:05.297562   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:05.297865   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:40:05.297884   33734 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:40:05.298070   33734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 20:40:05.302240   33734 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1207 20:40:05.302419   33734 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958 for IP: 192.168.39.69
	I1207 20:40:05.302449   33734 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:40:05.302607   33734 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 20:40:05.302660   33734 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 20:40:05.302672   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 20:40:05.302686   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 20:40:05.302699   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 20:40:05.302710   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 20:40:05.302772   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 20:40:05.302812   33734 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 20:40:05.302827   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 20:40:05.302859   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 20:40:05.302892   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:40:05.302932   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 20:40:05.302987   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:40:05.303025   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:40:05.303046   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem -> /usr/share/ca-certificates/16840.pem
	I1207 20:40:05.303059   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /usr/share/ca-certificates/168402.pem
	I1207 20:40:05.303420   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:40:05.326267   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 20:40:05.348747   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:40:05.371504   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:40:05.394006   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:40:05.418227   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 20:40:05.441826   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 20:40:05.463960   33734 ssh_runner.go:195] Run: openssl version
	I1207 20:40:05.469973   33734 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1207 20:40:05.470051   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:40:05.483225   33734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:40:05.488007   33734 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:40:05.488263   33734 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:40:05.488321   33734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:40:05.494260   33734 command_runner.go:130] > b5213941
	I1207 20:40:05.494324   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 20:40:05.505026   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 20:40:05.517043   33734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 20:40:05.521731   33734 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:40:05.521825   33734 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:40:05.521869   33734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 20:40:05.527396   33734 command_runner.go:130] > 51391683
	I1207 20:40:05.527761   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 20:40:05.538466   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 20:40:05.550574   33734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 20:40:05.555615   33734 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:40:05.555821   33734 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:40:05.555861   33734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 20:40:05.561618   33734 command_runner.go:130] > 3ec20f2e
	I1207 20:40:05.561893   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
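
For each CA bundle above, the log shows the file being placed under /usr/share/ca-certificates, its OpenSSL subject hash being computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink being created under /etc/ssl/certs so TLS libraries can find the certificate by hash. A small Go sketch of that final link step, reusing the b5213941 hash from the log and assuming the hash has already been computed separately:

// Sketch of the hashed-symlink step logged above: given a CA file and its
// OpenSSL subject hash (e.g. "b5213941"), ensure /etc/ssl/certs/<hash>.0
// points at it. The paths and hash are taken from the log; illustration only.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func linkHashedCert(certPath, hash string) error {
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// mirror `ln -fs`: replace any existing link, then create a fresh one
	if err := os.Remove(link); err != nil && !os.IsNotExist(err) {
		return err
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkHashedCert("/etc/ssl/certs/minikubeCA.pem", "b5213941"); err != nil {
		fmt.Println("link failed:", err)
	}
}
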
	I1207 20:40:05.572201   33734 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:40:05.576371   33734 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:40:05.576681   33734 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:40:05.576769   33734 ssh_runner.go:195] Run: crio config
	I1207 20:40:05.635067   33734 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1207 20:40:05.635093   33734 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1207 20:40:05.635099   33734 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1207 20:40:05.635103   33734 command_runner.go:130] > #
	I1207 20:40:05.635110   33734 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1207 20:40:05.635116   33734 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1207 20:40:05.635122   33734 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1207 20:40:05.635132   33734 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1207 20:40:05.635139   33734 command_runner.go:130] > # reload'.
	I1207 20:40:05.635145   33734 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1207 20:40:05.635152   33734 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1207 20:40:05.635158   33734 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1207 20:40:05.635165   33734 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1207 20:40:05.635169   33734 command_runner.go:130] > [crio]
	I1207 20:40:05.635176   33734 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1207 20:40:05.635182   33734 command_runner.go:130] > # containers images, in this directory.
	I1207 20:40:05.635191   33734 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1207 20:40:05.635199   33734 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1207 20:40:05.635206   33734 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1207 20:40:05.635216   33734 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1207 20:40:05.635230   33734 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1207 20:40:05.635263   33734 command_runner.go:130] > storage_driver = "overlay"
	I1207 20:40:05.635274   33734 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1207 20:40:05.635280   33734 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1207 20:40:05.635285   33734 command_runner.go:130] > storage_option = [
	I1207 20:40:05.635294   33734 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1207 20:40:05.635305   33734 command_runner.go:130] > ]
	I1207 20:40:05.635321   33734 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1207 20:40:05.635332   33734 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1207 20:40:05.635345   33734 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1207 20:40:05.635355   33734 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1207 20:40:05.635363   33734 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1207 20:40:05.635371   33734 command_runner.go:130] > # always happen on a node reboot
	I1207 20:40:05.635499   33734 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1207 20:40:05.635520   33734 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1207 20:40:05.635531   33734 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1207 20:40:05.635549   33734 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1207 20:40:05.635560   33734 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1207 20:40:05.635575   33734 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1207 20:40:05.635592   33734 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1207 20:40:05.635603   33734 command_runner.go:130] > # internal_wipe = true
	I1207 20:40:05.635612   33734 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1207 20:40:05.635626   33734 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1207 20:40:05.635639   33734 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1207 20:40:05.635648   33734 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1207 20:40:05.635662   33734 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1207 20:40:05.635679   33734 command_runner.go:130] > [crio.api]
	I1207 20:40:05.635700   33734 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1207 20:40:05.635712   33734 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1207 20:40:05.635725   33734 command_runner.go:130] > # IP address on which the stream server will listen.
	I1207 20:40:05.635733   33734 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1207 20:40:05.635743   33734 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1207 20:40:05.635754   33734 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1207 20:40:05.635761   33734 command_runner.go:130] > # stream_port = "0"
	I1207 20:40:05.635771   33734 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1207 20:40:05.635777   33734 command_runner.go:130] > # stream_enable_tls = false
	I1207 20:40:05.635788   33734 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1207 20:40:05.635796   33734 command_runner.go:130] > # stream_idle_timeout = ""
	I1207 20:40:05.635810   33734 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1207 20:40:05.635821   33734 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1207 20:40:05.635830   33734 command_runner.go:130] > # minutes.
	I1207 20:40:05.635837   33734 command_runner.go:130] > # stream_tls_cert = ""
	I1207 20:40:05.635850   33734 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1207 20:40:05.635863   33734 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1207 20:40:05.635873   33734 command_runner.go:130] > # stream_tls_key = ""
	I1207 20:40:05.635882   33734 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1207 20:40:05.635895   33734 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1207 20:40:05.635905   33734 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1207 20:40:05.635950   33734 command_runner.go:130] > # stream_tls_ca = ""
	I1207 20:40:05.635966   33734 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1207 20:40:05.635976   33734 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1207 20:40:05.635985   33734 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1207 20:40:05.635996   33734 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1207 20:40:05.636018   33734 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1207 20:40:05.636030   33734 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1207 20:40:05.636037   33734 command_runner.go:130] > [crio.runtime]
	I1207 20:40:05.636050   33734 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1207 20:40:05.636062   33734 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1207 20:40:05.636074   33734 command_runner.go:130] > # "nofile=1024:2048"
	I1207 20:40:05.636088   33734 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1207 20:40:05.636098   33734 command_runner.go:130] > # default_ulimits = [
	I1207 20:40:05.636108   33734 command_runner.go:130] > # ]
	I1207 20:40:05.636122   33734 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1207 20:40:05.636132   33734 command_runner.go:130] > # no_pivot = false
	I1207 20:40:05.636143   33734 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1207 20:40:05.636153   33734 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1207 20:40:05.636164   33734 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1207 20:40:05.636177   33734 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1207 20:40:05.636190   33734 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1207 20:40:05.636206   33734 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1207 20:40:05.636218   33734 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1207 20:40:05.636227   33734 command_runner.go:130] > # Cgroup setting for conmon
	I1207 20:40:05.636242   33734 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1207 20:40:05.636252   33734 command_runner.go:130] > conmon_cgroup = "pod"
	I1207 20:40:05.636265   33734 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1207 20:40:05.636278   33734 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1207 20:40:05.636290   33734 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1207 20:40:05.636300   33734 command_runner.go:130] > conmon_env = [
	I1207 20:40:05.636313   33734 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1207 20:40:05.636323   33734 command_runner.go:130] > ]
	I1207 20:40:05.636335   33734 command_runner.go:130] > # Additional environment variables to set for all the
	I1207 20:40:05.636346   33734 command_runner.go:130] > # containers. These are overridden if set in the
	I1207 20:40:05.636355   33734 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1207 20:40:05.636362   33734 command_runner.go:130] > # default_env = [
	I1207 20:40:05.636371   33734 command_runner.go:130] > # ]
	I1207 20:40:05.636382   33734 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1207 20:40:05.636393   33734 command_runner.go:130] > # selinux = false
	I1207 20:40:05.636406   33734 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1207 20:40:05.636419   33734 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1207 20:40:05.636432   33734 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1207 20:40:05.636443   33734 command_runner.go:130] > # seccomp_profile = ""
	I1207 20:40:05.636457   33734 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1207 20:40:05.636471   33734 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1207 20:40:05.636485   33734 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1207 20:40:05.636496   33734 command_runner.go:130] > # which might increase security.
	I1207 20:40:05.636507   33734 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1207 20:40:05.636521   33734 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1207 20:40:05.636535   33734 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1207 20:40:05.636543   33734 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1207 20:40:05.636556   33734 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1207 20:40:05.636569   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:40:05.636581   33734 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1207 20:40:05.636593   33734 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1207 20:40:05.636604   33734 command_runner.go:130] > # the cgroup blockio controller.
	I1207 20:40:05.636623   33734 command_runner.go:130] > # blockio_config_file = ""
	I1207 20:40:05.636635   33734 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1207 20:40:05.636641   33734 command_runner.go:130] > # irqbalance daemon.
	I1207 20:40:05.636650   33734 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1207 20:40:05.636665   33734 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1207 20:40:05.636677   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:40:05.636688   33734 command_runner.go:130] > # rdt_config_file = ""
	I1207 20:40:05.636700   33734 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1207 20:40:05.636710   33734 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1207 20:40:05.636723   33734 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1207 20:40:05.636754   33734 command_runner.go:130] > # separate_pull_cgroup = ""
	I1207 20:40:05.636769   33734 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1207 20:40:05.636783   33734 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1207 20:40:05.636793   33734 command_runner.go:130] > # will be added.
	I1207 20:40:05.636803   33734 command_runner.go:130] > # default_capabilities = [
	I1207 20:40:05.636813   33734 command_runner.go:130] > # 	"CHOWN",
	I1207 20:40:05.636823   33734 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1207 20:40:05.636832   33734 command_runner.go:130] > # 	"FSETID",
	I1207 20:40:05.636838   33734 command_runner.go:130] > # 	"FOWNER",
	I1207 20:40:05.636844   33734 command_runner.go:130] > # 	"SETGID",
	I1207 20:40:05.636854   33734 command_runner.go:130] > # 	"SETUID",
	I1207 20:40:05.636865   33734 command_runner.go:130] > # 	"SETPCAP",
	I1207 20:40:05.636872   33734 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1207 20:40:05.636882   33734 command_runner.go:130] > # 	"KILL",
	I1207 20:40:05.636890   33734 command_runner.go:130] > # ]
	I1207 20:40:05.636903   33734 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1207 20:40:05.636917   33734 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1207 20:40:05.636926   33734 command_runner.go:130] > # default_sysctls = [
	I1207 20:40:05.636947   33734 command_runner.go:130] > # ]
	I1207 20:40:05.636958   33734 command_runner.go:130] > # List of devices on the host that a
	I1207 20:40:05.636971   33734 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1207 20:40:05.636982   33734 command_runner.go:130] > # allowed_devices = [
	I1207 20:40:05.636991   33734 command_runner.go:130] > # 	"/dev/fuse",
	I1207 20:40:05.636997   33734 command_runner.go:130] > # ]
	I1207 20:40:05.637005   33734 command_runner.go:130] > # List of additional devices. specified as
	I1207 20:40:05.637021   33734 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1207 20:40:05.637034   33734 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1207 20:40:05.637059   33734 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1207 20:40:05.637069   33734 command_runner.go:130] > # additional_devices = [
	I1207 20:40:05.637075   33734 command_runner.go:130] > # ]
	I1207 20:40:05.637087   33734 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1207 20:40:05.637094   33734 command_runner.go:130] > # cdi_spec_dirs = [
	I1207 20:40:05.637104   33734 command_runner.go:130] > # 	"/etc/cdi",
	I1207 20:40:05.637111   33734 command_runner.go:130] > # 	"/var/run/cdi",
	I1207 20:40:05.637122   33734 command_runner.go:130] > # ]
	I1207 20:40:05.637132   33734 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1207 20:40:05.637140   33734 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1207 20:40:05.637146   33734 command_runner.go:130] > # Defaults to false.
	I1207 20:40:05.637152   33734 command_runner.go:130] > # device_ownership_from_security_context = false
	I1207 20:40:05.637164   33734 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1207 20:40:05.637178   33734 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1207 20:40:05.637189   33734 command_runner.go:130] > # hooks_dir = [
	I1207 20:40:05.637202   33734 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1207 20:40:05.637212   33734 command_runner.go:130] > # ]
	I1207 20:40:05.637225   33734 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1207 20:40:05.637239   33734 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1207 20:40:05.637251   33734 command_runner.go:130] > # its default mounts from the following two files:
	I1207 20:40:05.637261   33734 command_runner.go:130] > #
	I1207 20:40:05.637274   33734 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1207 20:40:05.637288   33734 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1207 20:40:05.637302   33734 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1207 20:40:05.637310   33734 command_runner.go:130] > #
	I1207 20:40:05.637322   33734 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1207 20:40:05.637335   33734 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1207 20:40:05.637347   33734 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1207 20:40:05.637360   33734 command_runner.go:130] > #      only add mounts it finds in this file.
	I1207 20:40:05.637368   33734 command_runner.go:130] > #
	I1207 20:40:05.637377   33734 command_runner.go:130] > # default_mounts_file = ""
	I1207 20:40:05.637390   33734 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1207 20:40:05.637405   33734 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1207 20:40:05.637415   33734 command_runner.go:130] > pids_limit = 1024
	I1207 20:40:05.637429   33734 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1207 20:40:05.637441   33734 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1207 20:40:05.637456   33734 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1207 20:40:05.637473   33734 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1207 20:40:05.637484   33734 command_runner.go:130] > # log_size_max = -1
	I1207 20:40:05.637498   33734 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1207 20:40:05.637509   33734 command_runner.go:130] > # log_to_journald = false
	I1207 20:40:05.637520   33734 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1207 20:40:05.637532   33734 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1207 20:40:05.637544   33734 command_runner.go:130] > # Path to directory for container attach sockets.
	I1207 20:40:05.637557   33734 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1207 20:40:05.637570   33734 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1207 20:40:05.637580   33734 command_runner.go:130] > # bind_mount_prefix = ""
	I1207 20:40:05.637594   33734 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1207 20:40:05.637604   33734 command_runner.go:130] > # read_only = false
	I1207 20:40:05.637615   33734 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1207 20:40:05.637627   33734 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1207 20:40:05.637633   33734 command_runner.go:130] > # live configuration reload.
	I1207 20:40:05.637640   33734 command_runner.go:130] > # log_level = "info"
	I1207 20:40:05.637650   33734 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1207 20:40:05.637662   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:40:05.637672   33734 command_runner.go:130] > # log_filter = ""
	I1207 20:40:05.637683   33734 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1207 20:40:05.637695   33734 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1207 20:40:05.637706   33734 command_runner.go:130] > # separated by comma.
	I1207 20:40:05.637712   33734 command_runner.go:130] > # uid_mappings = ""
	I1207 20:40:05.637725   33734 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1207 20:40:05.637738   33734 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1207 20:40:05.637749   33734 command_runner.go:130] > # separated by comma.
	I1207 20:40:05.637759   33734 command_runner.go:130] > # gid_mappings = ""
	I1207 20:40:05.637771   33734 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1207 20:40:05.637816   33734 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1207 20:40:05.637830   33734 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1207 20:40:05.637841   33734 command_runner.go:130] > # minimum_mappable_uid = -1
	I1207 20:40:05.637854   33734 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1207 20:40:05.637866   33734 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1207 20:40:05.637879   33734 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1207 20:40:05.637890   33734 command_runner.go:130] > # minimum_mappable_gid = -1
	I1207 20:40:05.637903   33734 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1207 20:40:05.637915   33734 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1207 20:40:05.637949   33734 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1207 20:40:05.637961   33734 command_runner.go:130] > # ctr_stop_timeout = 30
	I1207 20:40:05.637974   33734 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1207 20:40:05.637987   33734 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1207 20:40:05.637997   33734 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1207 20:40:05.638007   33734 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1207 20:40:05.638017   33734 command_runner.go:130] > drop_infra_ctr = false
	I1207 20:40:05.638030   33734 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1207 20:40:05.638039   33734 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1207 20:40:05.638054   33734 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1207 20:40:05.638064   33734 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1207 20:40:05.638074   33734 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1207 20:40:05.638085   33734 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1207 20:40:05.638095   33734 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1207 20:40:05.638106   33734 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1207 20:40:05.638117   33734 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1207 20:40:05.638127   33734 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1207 20:40:05.638140   33734 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1207 20:40:05.638151   33734 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1207 20:40:05.638161   33734 command_runner.go:130] > # default_runtime = "runc"
	I1207 20:40:05.638173   33734 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1207 20:40:05.638188   33734 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1207 20:40:05.638206   33734 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1207 20:40:05.638215   33734 command_runner.go:130] > # creation as a file is not desired either.
	I1207 20:40:05.638237   33734 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1207 20:40:05.638249   33734 command_runner.go:130] > # the hostname is being managed dynamically.
	I1207 20:40:05.638261   33734 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1207 20:40:05.638270   33734 command_runner.go:130] > # ]
	I1207 20:40:05.638280   33734 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1207 20:40:05.638290   33734 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1207 20:40:05.638303   33734 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1207 20:40:05.638316   33734 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1207 20:40:05.638326   33734 command_runner.go:130] > #
	I1207 20:40:05.638335   33734 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1207 20:40:05.638347   33734 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1207 20:40:05.638357   33734 command_runner.go:130] > #  runtime_type = "oci"
	I1207 20:40:05.638365   33734 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1207 20:40:05.638375   33734 command_runner.go:130] > #  privileged_without_host_devices = false
	I1207 20:40:05.638385   33734 command_runner.go:130] > #  allowed_annotations = []
	I1207 20:40:05.638391   33734 command_runner.go:130] > # Where:
	I1207 20:40:05.638403   33734 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1207 20:40:05.638417   33734 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1207 20:40:05.638427   33734 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1207 20:40:05.638440   33734 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1207 20:40:05.638449   33734 command_runner.go:130] > #   in $PATH.
	I1207 20:40:05.638455   33734 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1207 20:40:05.638463   33734 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1207 20:40:05.638469   33734 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1207 20:40:05.638475   33734 command_runner.go:130] > #   state.
	I1207 20:40:05.638481   33734 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1207 20:40:05.638489   33734 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1207 20:40:05.638496   33734 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1207 20:40:05.638508   33734 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1207 20:40:05.638518   33734 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1207 20:40:05.638527   33734 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1207 20:40:05.638535   33734 command_runner.go:130] > #   The currently recognized values are:
	I1207 20:40:05.638546   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1207 20:40:05.638562   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1207 20:40:05.638606   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1207 20:40:05.638620   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1207 20:40:05.638635   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1207 20:40:05.638650   33734 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1207 20:40:05.638664   33734 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1207 20:40:05.638679   33734 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1207 20:40:05.638694   33734 command_runner.go:130] > #   should be moved to the container's cgroup
	I1207 20:40:05.638705   33734 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1207 20:40:05.638716   33734 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1207 20:40:05.638726   33734 command_runner.go:130] > runtime_type = "oci"
	I1207 20:40:05.638735   33734 command_runner.go:130] > runtime_root = "/run/runc"
	I1207 20:40:05.638747   33734 command_runner.go:130] > runtime_config_path = ""
	I1207 20:40:05.638758   33734 command_runner.go:130] > monitor_path = ""
	I1207 20:40:05.638768   33734 command_runner.go:130] > monitor_cgroup = ""
	I1207 20:40:05.638778   33734 command_runner.go:130] > monitor_exec_cgroup = ""
	I1207 20:40:05.638790   33734 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1207 20:40:05.638798   33734 command_runner.go:130] > # running containers
	I1207 20:40:05.638808   33734 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1207 20:40:05.638822   33734 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1207 20:40:05.638856   33734 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1207 20:40:05.638870   33734 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1207 20:40:05.638882   33734 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1207 20:40:05.638894   33734 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1207 20:40:05.638905   33734 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1207 20:40:05.638916   33734 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1207 20:40:05.638934   33734 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1207 20:40:05.638945   33734 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1207 20:40:05.638957   33734 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1207 20:40:05.638970   33734 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1207 20:40:05.638984   33734 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1207 20:40:05.639003   33734 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1207 20:40:05.639019   33734 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1207 20:40:05.639033   33734 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1207 20:40:05.639051   33734 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1207 20:40:05.639068   33734 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1207 20:40:05.639081   33734 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1207 20:40:05.639097   33734 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1207 20:40:05.639106   33734 command_runner.go:130] > # Example:
	I1207 20:40:05.639115   33734 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1207 20:40:05.639126   33734 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1207 20:40:05.639138   33734 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1207 20:40:05.639151   33734 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1207 20:40:05.639161   33734 command_runner.go:130] > # cpuset = 0
	I1207 20:40:05.639172   33734 command_runner.go:130] > # cpushares = "0-1"
	I1207 20:40:05.639182   33734 command_runner.go:130] > # Where:
	I1207 20:40:05.639191   33734 command_runner.go:130] > # The workload name is workload-type.
	I1207 20:40:05.639205   33734 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1207 20:40:05.639220   33734 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1207 20:40:05.639234   33734 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1207 20:40:05.639253   33734 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1207 20:40:05.639267   33734 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1207 20:40:05.639277   33734 command_runner.go:130] > # 
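For readers who want to exercise the workload annotations described in the config comments above, the following is a minimal, hypothetical sketch (not part of this test run) of a pod that opts into the example "workload-type" workload. The annotation keys mirror the commented example; the pod name, container name, and image are placeholders.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	// Hypothetical pod opting into the example workload: the activation
    	// annotation "io.crio/workload" is key-only, and the per-container
    	// override follows the "io.crio.workload-type/$container_name" form
    	// shown in the config comments above.
    	pod := corev1.Pod{
    		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "workload-demo",
    			Annotations: map[string]string{
    				"io.crio/workload":           "",
    				"io.crio.workload-type/demo": `{"cpushares": "512"}`,
    			},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "demo",
    				Image:   "busybox",
    				Command: []string{"sleep", "3600"},
    			}},
    		},
    	}
    	out, _ := yaml.Marshal(pod)
    	fmt.Print(string(out))
    }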
	I1207 20:40:05.639289   33734 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1207 20:40:05.639297   33734 command_runner.go:130] > #
	I1207 20:40:05.639308   33734 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1207 20:40:05.639322   33734 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1207 20:40:05.639337   33734 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1207 20:40:05.639351   33734 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1207 20:40:05.639365   33734 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1207 20:40:05.639375   33734 command_runner.go:130] > [crio.image]
	I1207 20:40:05.639388   33734 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1207 20:40:05.639400   33734 command_runner.go:130] > # default_transport = "docker://"
	I1207 20:40:05.639415   33734 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1207 20:40:05.639429   33734 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1207 20:40:05.639440   33734 command_runner.go:130] > # global_auth_file = ""
	I1207 20:40:05.639451   33734 command_runner.go:130] > # The image used to instantiate infra containers.
	I1207 20:40:05.639463   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:40:05.639476   33734 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1207 20:40:05.639491   33734 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1207 20:40:05.639504   33734 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1207 20:40:05.639516   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:40:05.639525   33734 command_runner.go:130] > # pause_image_auth_file = ""
	I1207 20:40:05.639561   33734 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1207 20:40:05.639576   33734 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1207 20:40:05.639590   33734 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1207 20:40:05.639603   33734 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1207 20:40:05.639615   33734 command_runner.go:130] > # pause_command = "/pause"
	I1207 20:40:05.639629   33734 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1207 20:40:05.639643   33734 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1207 20:40:05.639658   33734 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1207 20:40:05.639672   33734 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1207 20:40:05.639685   33734 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1207 20:40:05.639696   33734 command_runner.go:130] > # signature_policy = ""
	I1207 20:40:05.639711   33734 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1207 20:40:05.639726   33734 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1207 20:40:05.639736   33734 command_runner.go:130] > # changing them here.
	I1207 20:40:05.639745   33734 command_runner.go:130] > # insecure_registries = [
	I1207 20:40:05.639753   33734 command_runner.go:130] > # ]
	I1207 20:40:05.639765   33734 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1207 20:40:05.639777   33734 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1207 20:40:05.639790   33734 command_runner.go:130] > # image_volumes = "mkdir"
	I1207 20:40:05.639802   33734 command_runner.go:130] > # Temporary directory to use for storing big files
	I1207 20:40:05.639813   33734 command_runner.go:130] > # big_files_temporary_dir = ""
	I1207 20:40:05.639825   33734 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1207 20:40:05.639835   33734 command_runner.go:130] > # CNI plugins.
	I1207 20:40:05.639846   33734 command_runner.go:130] > [crio.network]
	I1207 20:40:05.639859   33734 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1207 20:40:05.639872   33734 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1207 20:40:05.639883   33734 command_runner.go:130] > # cni_default_network = ""
	I1207 20:40:05.639894   33734 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1207 20:40:05.639905   33734 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1207 20:40:05.639918   33734 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1207 20:40:05.639933   33734 command_runner.go:130] > # plugin_dirs = [
	I1207 20:40:05.639944   33734 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1207 20:40:05.639954   33734 command_runner.go:130] > # ]
	I1207 20:40:05.639964   33734 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1207 20:40:05.639974   33734 command_runner.go:130] > [crio.metrics]
	I1207 20:40:05.639986   33734 command_runner.go:130] > # Globally enable or disable metrics support.
	I1207 20:40:05.639997   33734 command_runner.go:130] > enable_metrics = true
	I1207 20:40:05.640008   33734 command_runner.go:130] > # Specify enabled metrics collectors.
	I1207 20:40:05.640019   33734 command_runner.go:130] > # Per default all metrics are enabled.
	I1207 20:40:05.640030   33734 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1207 20:40:05.640044   33734 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1207 20:40:05.640058   33734 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1207 20:40:05.640068   33734 command_runner.go:130] > # metrics_collectors = [
	I1207 20:40:05.640076   33734 command_runner.go:130] > # 	"operations",
	I1207 20:40:05.640086   33734 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1207 20:40:05.640097   33734 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1207 20:40:05.640107   33734 command_runner.go:130] > # 	"operations_errors",
	I1207 20:40:05.640116   33734 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1207 20:40:05.640127   33734 command_runner.go:130] > # 	"image_pulls_by_name",
	I1207 20:40:05.640137   33734 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1207 20:40:05.640147   33734 command_runner.go:130] > # 	"image_pulls_failures",
	I1207 20:40:05.640158   33734 command_runner.go:130] > # 	"image_pulls_successes",
	I1207 20:40:05.640169   33734 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1207 20:40:05.640180   33734 command_runner.go:130] > # 	"image_layer_reuse",
	I1207 20:40:05.640189   33734 command_runner.go:130] > # 	"containers_oom_total",
	I1207 20:40:05.640199   33734 command_runner.go:130] > # 	"containers_oom",
	I1207 20:40:05.640210   33734 command_runner.go:130] > # 	"processes_defunct",
	I1207 20:40:05.640220   33734 command_runner.go:130] > # 	"operations_total",
	I1207 20:40:05.640230   33734 command_runner.go:130] > # 	"operations_latency_seconds",
	I1207 20:40:05.640241   33734 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1207 20:40:05.640252   33734 command_runner.go:130] > # 	"operations_errors_total",
	I1207 20:40:05.640262   33734 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1207 20:40:05.640274   33734 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1207 20:40:05.640282   33734 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1207 20:40:05.640290   33734 command_runner.go:130] > # 	"image_pulls_success_total",
	I1207 20:40:05.640316   33734 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1207 20:40:05.640329   33734 command_runner.go:130] > # 	"containers_oom_count_total",
	I1207 20:40:05.640339   33734 command_runner.go:130] > # ]
	I1207 20:40:05.640352   33734 command_runner.go:130] > # The port on which the metrics server will listen.
	I1207 20:40:05.640363   33734 command_runner.go:130] > # metrics_port = 9090
	I1207 20:40:05.640376   33734 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1207 20:40:05.640387   33734 command_runner.go:130] > # metrics_socket = ""
	I1207 20:40:05.640397   33734 command_runner.go:130] > # The certificate for the secure metrics server.
	I1207 20:40:05.640411   33734 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1207 20:40:05.640425   33734 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1207 20:40:05.640441   33734 command_runner.go:130] > # certificate on any modification event.
	I1207 20:40:05.640448   33734 command_runner.go:130] > # metrics_cert = ""
	I1207 20:40:05.640457   33734 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1207 20:40:05.640469   33734 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1207 20:40:05.640476   33734 command_runner.go:130] > # metrics_key = ""
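Since enable_metrics is true here and the commented-out defaults leave metrics_port at 9090, the short sketch below shows how the Prometheus collectors named above could be scraped from the node. The URL is an assumption based on those defaults, not something this test actually does.

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Assumes CRI-O's default metrics port (9090) on the local node.
    	resp, err := http.Get("http://127.0.0.1:9090/metrics")
    	if err != nil {
    		fmt.Println("metrics endpoint not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("fetched %d bytes of metrics (crio_operations, image_pulls_*, ...)\n", len(body))
    }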
	I1207 20:40:05.640485   33734 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1207 20:40:05.640494   33734 command_runner.go:130] > [crio.tracing]
	I1207 20:40:05.640504   33734 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1207 20:40:05.640515   33734 command_runner.go:130] > # enable_tracing = false
	I1207 20:40:05.640525   33734 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1207 20:40:05.640536   33734 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1207 20:40:05.640548   33734 command_runner.go:130] > # Number of samples to collect per million spans.
	I1207 20:40:05.640559   33734 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1207 20:40:05.640572   33734 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1207 20:40:05.640580   33734 command_runner.go:130] > [crio.stats]
	I1207 20:40:05.640607   33734 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1207 20:40:05.640620   33734 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1207 20:40:05.640631   33734 command_runner.go:130] > # stats_collection_period = 0
	I1207 20:40:05.640715   33734 command_runner.go:130] ! time="2023-12-07 20:40:05.618515609Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1207 20:40:05.640741   33734 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1207 20:40:05.640820   33734 cni.go:84] Creating CNI manager for ""
	I1207 20:40:05.640834   33734 cni.go:136] 3 nodes found, recommending kindnet
	I1207 20:40:05.640849   33734 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:40:05.640875   33734 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-660958 NodeName:multinode-660958-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 20:40:05.641018   33734 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-660958-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 20:40:05.641069   33734 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-660958-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
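The kubeadm config and kubelet flags above are rendered by minikube from per-node values (advertise address, node name, pod subnet, Kubernetes version). The sketch below is a simplified, hypothetical illustration of that templating step; the struct fields and template text are stand-ins, not minikube's actual source.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative stand-in for the node-specific values fed into the config.
    type nodeConfig struct {
    	AdvertiseAddress  string
    	NodeName          string
    	PodSubnet         string
    	KubernetesVersion string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: 8443
    nodeRegistration:
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	_ = t.Execute(os.Stdout, nodeConfig{
    		AdvertiseAddress:  "192.168.39.69",
    		NodeName:          "multinode-660958-m02",
    		PodSubnet:         "10.244.0.0/16",
    		KubernetesVersion: "v1.28.4",
    	})
    }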
	I1207 20:40:05.641143   33734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 20:40:05.651798   33734 command_runner.go:130] > kubeadm
	I1207 20:40:05.651819   33734 command_runner.go:130] > kubectl
	I1207 20:40:05.651823   33734 command_runner.go:130] > kubelet
	I1207 20:40:05.651909   33734 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 20:40:05.651982   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1207 20:40:05.662194   33734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1207 20:40:05.678048   33734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 20:40:05.695236   33734 ssh_runner.go:195] Run: grep 192.168.39.19	control-plane.minikube.internal$ /etc/hosts
	I1207 20:40:05.699317   33734 command_runner.go:130] > 192.168.39.19	control-plane.minikube.internal
	I1207 20:40:05.699463   33734 host.go:66] Checking if "multinode-660958" exists ...
	I1207 20:40:05.699761   33734 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:40:05.699826   33734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:40:05.699873   33734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:40:05.714559   33734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I1207 20:40:05.715025   33734 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:40:05.715494   33734 main.go:141] libmachine: Using API Version  1
	I1207 20:40:05.715519   33734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:40:05.715824   33734 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:40:05.716009   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:40:05.716154   33734 start.go:304] JoinCluster: &{Name:multinode-660958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.20 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false in
gress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:40:05.716255   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1207 20:40:05.716270   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:40:05.719199   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:40:05.719649   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:40:05.719681   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:40:05.719814   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:40:05.719991   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:40:05.720132   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:40:05.720261   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:40:05.908437   33734 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token j3eony.5pnpyhm8u7atsgka --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
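The join command above is produced by running kubeadm's token creation on the control-plane host. A minimal sketch of that step, mirroring the logged invocation (the binary path is an assumption about this host), might look like:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the logged command; produces a fresh, non-expiring join command.
    	out, err := exec.Command("/bin/bash", "-c",
    		`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0`).CombinedOutput()
    	if err != nil {
    		fmt.Println("token create failed:", err)
    		return
    	}
    	// Output resembles: kubeadm join control-plane.minikube.internal:8443 --token ... --discovery-token-ca-cert-hash sha256:...
    	fmt.Print(string(out))
    }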
	I1207 20:40:05.908629   33734 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1207 20:40:05.908671   33734 host.go:66] Checking if "multinode-660958" exists ...
	I1207 20:40:05.908996   33734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:40:05.909034   33734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:40:05.923165   33734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I1207 20:40:05.923585   33734 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:40:05.924064   33734 main.go:141] libmachine: Using API Version  1
	I1207 20:40:05.924091   33734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:40:05.924391   33734 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:40:05.924550   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:40:05.924716   33734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-660958-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1207 20:40:05.924735   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:40:05.927715   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:40:05.928213   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:40:05.928243   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:40:05.928378   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:40:05.928548   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:40:05.928730   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:40:05.928869   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:40:06.087169   33734 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1207 20:40:06.149847   33734 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-d764j, kube-system/kube-proxy-rxqfp
	I1207 20:40:09.172498   33734 command_runner.go:130] > node/multinode-660958-m02 cordoned
	I1207 20:40:09.172523   33734 command_runner.go:130] > pod "busybox-5bc68d56bd-vllfc" has DeletionTimestamp older than 1 seconds, skipping
	I1207 20:40:09.172530   33734 command_runner.go:130] > node/multinode-660958-m02 drained
	I1207 20:40:09.172553   33734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-660958-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.247816563s)
	I1207 20:40:09.172568   33734 node.go:108] successfully drained node "m02"
	I1207 20:40:09.172879   33734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:40:09.173089   33734 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:40:09.173432   33734 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1207 20:40:09.173477   33734 round_trippers.go:463] DELETE https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:40:09.173490   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:09.173497   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:09.173503   33734 round_trippers.go:473]     Content-Type: application/json
	I1207 20:40:09.173509   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:09.189022   33734 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1207 20:40:09.189040   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:09.189046   33734 round_trippers.go:580]     Audit-Id: 5273a7a9-ce99-433d-9993-9f1ea1ccde4e
	I1207 20:40:09.189052   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:09.189057   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:09.189064   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:09.189072   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:09.189079   33734 round_trippers.go:580]     Content-Length: 171
	I1207 20:40:09.189086   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:09 GMT
	I1207 20:40:09.189534   33734 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-660958-m02","kind":"nodes","uid":"b5e716af-2f5f-4a15-b890-3b390e2dc712"}}
	I1207 20:40:09.189564   33734 node.go:124] successfully deleted node "m02"
	I1207 20:40:09.189573   33734 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
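The DELETE call above removes the stale m02 node object before the rejoin attempt. A minimal client-go sketch of the same operation, assuming the kubeconfig path shown in the log, could be:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path taken from the log above; adjust for other environments.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17719-9628/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Equivalent of DELETE /api/v1/nodes/multinode-660958-m02.
    	if err := cs.CoreV1().Nodes().Delete(context.Background(), "multinode-660958-m02", metav1.DeleteOptions{}); err != nil {
    		fmt.Println("delete node:", err)
    		return
    	}
    	fmt.Println("node multinode-660958-m02 deleted")
    }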
	I1207 20:40:09.189598   33734 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1207 20:40:09.189619   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j3eony.5pnpyhm8u7atsgka --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-660958-m02"
	I1207 20:40:09.255494   33734 command_runner.go:130] ! W1207 20:40:09.242796    2663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1207 20:40:09.255548   33734 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1207 20:40:09.405847   33734 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1207 20:40:09.405878   33734 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1207 20:40:10.213179   33734 command_runner.go:130] > [preflight] Running pre-flight checks
	I1207 20:40:10.213209   33734 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1207 20:40:10.213222   33734 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1207 20:40:10.213233   33734 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 20:40:10.213240   33734 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 20:40:10.213245   33734 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1207 20:40:10.213251   33734 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1207 20:40:10.213257   33734 command_runner.go:130] > This node has joined the cluster:
	I1207 20:40:10.213263   33734 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1207 20:40:10.213272   33734 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1207 20:40:10.213281   33734 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1207 20:40:10.213306   33734 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j3eony.5pnpyhm8u7atsgka --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-660958-m02": (1.023669043s)
	I1207 20:40:10.213325   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1207 20:40:10.470911   33734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=multinode-660958 minikube.k8s.io/updated_at=2023_12_07T20_40_10_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:40:10.609512   33734 command_runner.go:130] > node/multinode-660958-m02 labeled
	I1207 20:40:10.609538   33734 command_runner.go:130] > node/multinode-660958-m03 labeled
	I1207 20:40:10.609555   33734 start.go:306] JoinCluster complete in 4.893402211s
	I1207 20:40:10.609565   33734 cni.go:84] Creating CNI manager for ""
	I1207 20:40:10.609570   33734 cni.go:136] 3 nodes found, recommending kindnet
	I1207 20:40:10.609609   33734 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 20:40:10.615698   33734 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1207 20:40:10.615715   33734 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1207 20:40:10.615722   33734 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1207 20:40:10.615728   33734 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1207 20:40:10.615734   33734 command_runner.go:130] > Access: 2023-12-07 20:37:40.624910444 +0000
	I1207 20:40:10.615739   33734 command_runner.go:130] > Modify: 2023-12-05 19:27:41.000000000 +0000
	I1207 20:40:10.615744   33734 command_runner.go:130] > Change: 2023-12-07 20:37:38.610910444 +0000
	I1207 20:40:10.615748   33734 command_runner.go:130] >  Birth: -
	I1207 20:40:10.616232   33734 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1207 20:40:10.616244   33734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1207 20:40:10.637751   33734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 20:40:11.013881   33734 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1207 20:40:11.013904   33734 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1207 20:40:11.013911   33734 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1207 20:40:11.013915   33734 command_runner.go:130] > daemonset.apps/kindnet configured
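With the kindnet manifest applied, one way to confirm the DaemonSet converged is to compare desired versus ready pods. A hedged client-go sketch (kubeconfig path assumed from the log, not part of the test itself):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17719-9628/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Check the kindnet DaemonSet applied above.
    	ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.Background(), "kindnet", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("kindnet: %d/%d ready\n", ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
    }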
	I1207 20:40:11.014305   33734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:40:11.014608   33734 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:40:11.015060   33734 round_trippers.go:463] GET https://192.168.39.19:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1207 20:40:11.015078   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.015086   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.015093   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.019712   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:40:11.019735   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.019745   33734 round_trippers.go:580]     Audit-Id: d9afc30b-e6fd-4160-9568-869d5b29ab5a
	I1207 20:40:11.019754   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.019761   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.019770   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.019777   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.019791   33734 round_trippers.go:580]     Content-Length: 291
	I1207 20:40:11.019799   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.019869   33734 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d249b622-1ef8-42db-b860-e5219d7241f8","resourceVersion":"883","creationTimestamp":"2023-12-07T20:27:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1207 20:40:11.019980   33734 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-660958" context rescaled to 1 replicas
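The rescale above goes through the Deployment's scale subresource (the GET .../deployments/coredns/scale shown just before, followed by an update when needed). A minimal client-go sketch of that pattern, with the kubeconfig path assumed from the log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17719-9628/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()
    	// Read the current scale of the coredns Deployment.
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Rescale to a single replica only if it is not already there.
    	if scale.Spec.Replicas != 1 {
    		scale.Spec.Replicas = 1
    		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("coredns scaled to 1 replica")
    }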
	I1207 20:40:11.020015   33734 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1207 20:40:11.021976   33734 out.go:177] * Verifying Kubernetes components...
	I1207 20:40:11.023446   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:40:11.037206   33734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:40:11.037440   33734 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:40:11.037703   33734 node_ready.go:35] waiting up to 6m0s for node "multinode-660958-m02" to be "Ready" ...
	I1207 20:40:11.037778   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:40:11.037789   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.037801   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.037809   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.040342   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:40:11.040366   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.040374   33734 round_trippers.go:580]     Audit-Id: 26dcc9ac-ba95-4cc3-89ef-f0917a724868
	I1207 20:40:11.040379   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.040385   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.040390   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.040395   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.040401   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.040572   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"1301da57-5d53-4659-aa78-22c7b081e11a","resourceVersion":"1032","creationTimestamp":"2023-12-07T20:40:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_40_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I1207 20:40:11.040879   33734 node_ready.go:49] node "multinode-660958-m02" has status "Ready":"True"
	I1207 20:40:11.040894   33734 node_ready.go:38] duration metric: took 3.175777ms waiting for node "multinode-660958-m02" to be "Ready" ...
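The readiness wait above repeatedly fetches the node object and inspects its Ready condition. A simplified sketch of such a poll loop (client-go; kubeconfig path and node name taken from the log, intervals and timeout illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17719-9628/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Poll every 2s for up to 6m, checking the node's Ready condition.
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-660958-m02", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling on transient errors
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("node ready wait finished, err:", err)
    }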
	I1207 20:40:11.040902   33734 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:40:11.040962   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:40:11.040974   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.040985   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.040993   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.045010   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:40:11.045030   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.045039   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.045046   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.045054   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.045062   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.045069   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.045078   33734 round_trippers.go:580]     Audit-Id: 25d3950c-79fb-49a6-85ec-ab2b65c25f89
	I1207 20:40:11.047485   33734 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1039"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"879","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82190 chars]
	I1207 20:40:11.050837   33734 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.050922   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:40:11.050934   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.050941   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.050951   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.054042   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:40:11.054059   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.054069   33734 round_trippers.go:580]     Audit-Id: 96ab1b01-47ec-4bc8-81d4-ba9c607d7afa
	I1207 20:40:11.054077   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.054086   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.054103   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.054111   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.054120   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.054462   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"879","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1207 20:40:11.054948   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:40:11.054966   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.054977   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.054986   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.057644   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:40:11.057661   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.057668   33734 round_trippers.go:580]     Audit-Id: 22cdb295-bbec-49ea-8347-c9fa9d487df2
	I1207 20:40:11.057673   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.057680   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.057688   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.057700   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.057707   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.057965   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:40:11.058305   33734 pod_ready.go:92] pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace has status "Ready":"True"
	I1207 20:40:11.058320   33734 pod_ready.go:81] duration metric: took 7.461716ms waiting for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.058328   33734 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.058375   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-660958
	I1207 20:40:11.058383   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.058390   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.058396   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.060494   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:40:11.060512   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.060521   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.060526   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.060532   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.060541   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.060557   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.060566   33734 round_trippers.go:580]     Audit-Id: 87ca2960-8f2a-4c0f-8c5f-96c7493d9135
	I1207 20:40:11.060791   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-660958","namespace":"kube-system","uid":"997363d1-ef51-46b9-98ad-276aa803f3a8","resourceVersion":"852","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.19:2379","kubernetes.io/config.hash":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.mirror":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.seen":"2023-12-07T20:27:35.772724909Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1207 20:40:11.061125   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:40:11.061138   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.061148   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.061159   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.064344   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:40:11.064360   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.064366   33734 round_trippers.go:580]     Audit-Id: e706d85c-20c1-4f1a-829e-5f9dbf4589f8
	I1207 20:40:11.064371   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.064378   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.064386   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.064395   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.064403   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.064754   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:40:11.065044   33734 pod_ready.go:92] pod "etcd-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:40:11.065058   33734 pod_ready.go:81] duration metric: took 6.725572ms waiting for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.065077   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.065132   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-660958
	I1207 20:40:11.065141   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.065148   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.065156   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.068510   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:40:11.068529   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.068538   33734 round_trippers.go:580]     Audit-Id: 2207fe5d-ad64-427c-975d-b9e241e50ff0
	I1207 20:40:11.068546   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.068554   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.068563   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.068572   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.068582   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.068743   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-660958","namespace":"kube-system","uid":"ab5b9260-db2a-4625-aff0-8b0fcf6a74a8","resourceVersion":"856","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.19:8443","kubernetes.io/config.hash":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.mirror":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.seen":"2023-12-07T20:27:35.772728261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1207 20:40:11.069156   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:40:11.069169   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.069176   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.069181   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.071317   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:40:11.071338   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.071348   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.071356   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.071362   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.071371   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.071379   33734 round_trippers.go:580]     Audit-Id: 6b6414e3-a68b-47aa-a6d4-ff10735b1fcf
	I1207 20:40:11.071392   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.071635   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:40:11.072003   33734 pod_ready.go:92] pod "kube-apiserver-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:40:11.072023   33734 pod_ready.go:81] duration metric: took 6.937165ms waiting for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.072036   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.072084   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-660958
	I1207 20:40:11.072095   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.072106   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.072120   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.074633   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:40:11.074646   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.074653   33734 round_trippers.go:580]     Audit-Id: ec66de31-4ff1-41be-90cd-1a69e9d627fb
	I1207 20:40:11.074658   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.074663   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.074670   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.074678   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.074685   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.074936   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-660958","namespace":"kube-system","uid":"fb58a1b4-61c1-41c6-b3af-824cc7a08c14","resourceVersion":"871","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.mirror":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.seen":"2023-12-07T20:27:35.772729377Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1207 20:40:11.075272   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:40:11.075286   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.075295   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.075304   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.077996   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:40:11.078015   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.078030   33734 round_trippers.go:580]     Audit-Id: 84dfb5a2-a946-4ba1-aac3-cdeee6fe2f7c
	I1207 20:40:11.078039   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.078047   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.078055   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.078067   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.078073   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.078365   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:40:11.078671   33734 pod_ready.go:92] pod "kube-controller-manager-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:40:11.078687   33734 pod_ready.go:81] duration metric: took 6.643998ms waiting for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.078694   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mjptg" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.238082   33734 request.go:629] Waited for 159.331806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjptg
	I1207 20:40:11.238134   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjptg
	I1207 20:40:11.238139   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.238147   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.238153   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.240734   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:40:11.240757   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.240766   33734 round_trippers.go:580]     Audit-Id: a4503182-0486-4f9d-a33e-a1f73d5febea
	I1207 20:40:11.240774   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.240782   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.240795   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.240803   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.240810   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.240955   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjptg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f4f9d19-e657-4472-a434-2e0810ba6cf3","resourceVersion":"696","creationTimestamp":"2023-12-07T20:29:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:29:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1207 20:40:11.438816   33734 request.go:629] Waited for 197.377886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:40:11.438869   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:40:11.438874   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.438882   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.438888   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.441434   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:40:11.441451   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.441458   33734 round_trippers.go:580]     Audit-Id: 77b1b6ac-89c2-4021-97fe-6737e06566c2
	I1207 20:40:11.441463   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.441470   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.441477   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.441485   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.441492   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.441628   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m03","uid":"99d6ae8d-c617-438e-918b-4f4d3c4699de","resourceVersion":"1033","creationTimestamp":"2023-12-07T20:30:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_40_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:30:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3965 chars]
	I1207 20:40:11.441932   33734 pod_ready.go:92] pod "kube-proxy-mjptg" in "kube-system" namespace has status "Ready":"True"
	I1207 20:40:11.441951   33734 pod_ready.go:81] duration metric: took 363.250009ms waiting for pod "kube-proxy-mjptg" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.441963   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.638353   33734 request.go:629] Waited for 196.324144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:40:11.638405   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:40:11.638410   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.638417   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.638423   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.641173   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:40:11.641212   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.641221   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.641229   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.641237   33734 round_trippers.go:580]     Audit-Id: 7f8b754e-3237-4677-bdbb-7633544e22fa
	I1207 20:40:11.641245   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.641258   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.641268   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.641468   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pfc45","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e39fc15-3b2e-418c-92f1-32570e3bd853","resourceVersion":"789","creationTimestamp":"2023-12-07T20:27:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1207 20:40:11.838203   33734 request.go:629] Waited for 196.297109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:40:11.838271   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:40:11.838277   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:11.838284   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:11.838290   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:11.841809   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:40:11.841831   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:11.841840   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:11.841848   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:11.841855   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:11.841863   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:11 GMT
	I1207 20:40:11.841870   33734 round_trippers.go:580]     Audit-Id: 84193037-a89f-46a9-bec0-8b5ffb671be3
	I1207 20:40:11.841879   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:11.842071   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:40:11.842449   33734 pod_ready.go:92] pod "kube-proxy-pfc45" in "kube-system" namespace has status "Ready":"True"
	I1207 20:40:11.842469   33734 pod_ready.go:81] duration metric: took 400.499308ms waiting for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:11.842484   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rxqfp" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:12.037884   33734 request.go:629] Waited for 195.309621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxqfp
	I1207 20:40:12.037966   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxqfp
	I1207 20:40:12.037973   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:12.037981   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:12.037987   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:12.041039   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:40:12.041060   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:12.041070   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:12.041080   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:12.041089   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:12.041098   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:12 GMT
	I1207 20:40:12.041107   33734 round_trippers.go:580]     Audit-Id: 8514ddf3-a114-4ae1-b1ca-61e6e8350468
	I1207 20:40:12.041118   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:12.041295   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rxqfp","generateName":"kube-proxy-","namespace":"kube-system","uid":"c06f17e2-4050-4554-8c4a-057bca0bb5ff","resourceVersion":"1051","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1207 20:40:12.238031   33734 request.go:629] Waited for 196.315629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:40:12.238092   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:40:12.238109   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:12.238117   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:12.238124   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:12.242105   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:40:12.242124   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:12.242131   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:12.242137   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:12.242142   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:12 GMT
	I1207 20:40:12.242147   33734 round_trippers.go:580]     Audit-Id: d0c6896f-3476-4aa6-b302-b72b37f03bb4
	I1207 20:40:12.242152   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:12.242157   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:12.242758   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"1301da57-5d53-4659-aa78-22c7b081e11a","resourceVersion":"1032","creationTimestamp":"2023-12-07T20:40:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_40_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I1207 20:40:12.243016   33734 pod_ready.go:92] pod "kube-proxy-rxqfp" in "kube-system" namespace has status "Ready":"True"
	I1207 20:40:12.243034   33734 pod_ready.go:81] duration metric: took 400.542919ms waiting for pod "kube-proxy-rxqfp" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:12.243042   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:12.438477   33734 request.go:629] Waited for 195.380967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:40:12.438549   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:40:12.438558   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:12.438571   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:12.438579   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:12.441226   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:40:12.441249   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:12.441259   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:12.441268   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:12.441275   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:12 GMT
	I1207 20:40:12.441284   33734 round_trippers.go:580]     Audit-Id: 42ab81e2-0473-4ed1-80ba-84d3dbde0ce1
	I1207 20:40:12.441292   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:12.441307   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:12.441714   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-660958","namespace":"kube-system","uid":"ff5eb685-6086-4a98-b3b9-a485746dcbd4","resourceVersion":"849","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.mirror":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.seen":"2023-12-07T20:27:35.772730586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1207 20:40:12.638510   33734 request.go:629] Waited for 196.380357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:40:12.638572   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:40:12.638577   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:12.638584   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:12.638591   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:12.641660   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:40:12.641677   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:12.641683   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:12 GMT
	I1207 20:40:12.641689   33734 round_trippers.go:580]     Audit-Id: 4ace9a02-324c-4098-ac0e-79d7bdafe4a6
	I1207 20:40:12.641694   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:12.641699   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:12.641704   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:12.641709   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:12.641876   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:40:12.642325   33734 pod_ready.go:92] pod "kube-scheduler-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:40:12.642344   33734 pod_ready.go:81] duration metric: took 399.297675ms waiting for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:40:12.642354   33734 pod_ready.go:38] duration metric: took 1.601444317s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:40:12.642364   33734 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 20:40:12.642404   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:40:12.655601   33734 system_svc.go:56] duration metric: took 13.230421ms WaitForService to wait for kubelet.
	I1207 20:40:12.655627   33734 kubeadm.go:581] duration metric: took 1.635576547s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 20:40:12.655649   33734 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:40:12.838024   33734 request.go:629] Waited for 182.308009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1207 20:40:12.838088   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1207 20:40:12.838120   33734 round_trippers.go:469] Request Headers:
	I1207 20:40:12.838136   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:40:12.838150   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:40:12.841640   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:40:12.841664   33734 round_trippers.go:577] Response Headers:
	I1207 20:40:12.841673   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:40:12.841694   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:40:12 GMT
	I1207 20:40:12.841702   33734 round_trippers.go:580]     Audit-Id: 8313da9b-783c-4ec4-a017-8275bcc53c0e
	I1207 20:40:12.841709   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:40:12.841718   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:40:12.841727   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:40:12.842401   33734 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1056"},"items":[{"metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16437 chars]
	I1207 20:40:12.843001   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:40:12.843025   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:40:12.843035   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:40:12.843045   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:40:12.843057   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:40:12.843065   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:40:12.843075   33734 node_conditions.go:105] duration metric: took 187.420692ms to run NodePressure ...
	I1207 20:40:12.843090   33734 start.go:228] waiting for startup goroutines ...
	I1207 20:40:12.843110   33734 start.go:242] writing updated cluster config ...
	I1207 20:40:12.843586   33734 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:40:12.843685   33734 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
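	(Aside, not part of the captured log.) The pod_ready polling traced above for coredns, etcd, kube-apiserver, kube-controller-manager, the kube-proxy pods and kube-scheduler amounts to re-fetching each pod until its Ready condition reports True. A minimal client-go sketch of that check, assuming a kubeconfig at the default location; names such as waitPodReady and the 2s sleep are illustrative, not minikube's actual pod_ready.go implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady re-fetches the pod until its Ready condition is True or the timeout expires.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // crude fixed backoff; the real loop is also subject to client-side throttling
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Assumes a reachable kubeconfig at the user's default path.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-multinode-660958", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	The same loop structure covers the per-component waits logged above; only the namespace/name and the 6m0s timeout change.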
	I1207 20:40:12.847435   33734 out.go:177] * Starting worker node multinode-660958-m03 in cluster multinode-660958
	I1207 20:40:12.849050   33734 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:40:12.849072   33734 cache.go:56] Caching tarball of preloaded images
	I1207 20:40:12.849230   33734 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 20:40:12.849251   33734 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 20:40:12.849347   33734 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/config.json ...
	I1207 20:40:12.849512   33734 start.go:365] acquiring machines lock for multinode-660958-m03: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 20:40:12.849552   33734 start.go:369] acquired machines lock for "multinode-660958-m03" in 22.973µs
	I1207 20:40:12.849565   33734 start.go:96] Skipping create...Using existing machine configuration
	I1207 20:40:12.849572   33734 fix.go:54] fixHost starting: m03
	I1207 20:40:12.849920   33734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:40:12.849974   33734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:40:12.865774   33734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46199
	I1207 20:40:12.866255   33734 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:40:12.866748   33734 main.go:141] libmachine: Using API Version  1
	I1207 20:40:12.866774   33734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:40:12.867060   33734 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:40:12.867227   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .DriverName
	I1207 20:40:12.867339   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetState
	I1207 20:40:12.868789   33734 fix.go:102] recreateIfNeeded on multinode-660958-m03: state=Running err=<nil>
	W1207 20:40:12.868805   33734 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 20:40:12.870806   33734 out.go:177] * Updating the running kvm2 "multinode-660958-m03" VM ...
	I1207 20:40:12.872345   33734 machine.go:88] provisioning docker machine ...
	I1207 20:40:12.872369   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .DriverName
	I1207 20:40:12.872547   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetMachineName
	I1207 20:40:12.872679   33734 buildroot.go:166] provisioning hostname "multinode-660958-m03"
	I1207 20:40:12.872697   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetMachineName
	I1207 20:40:12.872837   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHHostname
	I1207 20:40:12.875122   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:12.875543   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:40:12.875574   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:12.875740   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHPort
	I1207 20:40:12.875912   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:40:12.876082   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:40:12.876235   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHUsername
	I1207 20:40:12.876435   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:40:12.876761   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1207 20:40:12.876776   33734 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-660958-m03 && echo "multinode-660958-m03" | sudo tee /etc/hostname
	I1207 20:40:13.017823   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-660958-m03
	
	I1207 20:40:13.017854   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHHostname
	I1207 20:40:13.020642   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:13.021037   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:40:13.021059   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:13.021295   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHPort
	I1207 20:40:13.021494   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:40:13.021667   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:40:13.021834   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHUsername
	I1207 20:40:13.022044   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:40:13.022369   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1207 20:40:13.022393   33734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-660958-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-660958-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-660958-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:40:13.142623   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:40:13.142653   33734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 20:40:13.142672   33734 buildroot.go:174] setting up certificates
	I1207 20:40:13.142684   33734 provision.go:83] configureAuth start
	I1207 20:40:13.142696   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetMachineName
	I1207 20:40:13.142966   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetIP
	I1207 20:40:13.145732   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:13.146133   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:40:13.146172   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:13.146282   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHHostname
	I1207 20:40:13.148614   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:13.148933   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:40:13.148957   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:13.149091   33734 provision.go:138] copyHostCerts
	I1207 20:40:13.149116   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:40:13.149147   33734 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 20:40:13.149158   33734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:40:13.149243   33734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 20:40:13.149342   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:40:13.149368   33734 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 20:40:13.149378   33734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:40:13.149415   33734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 20:40:13.149471   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:40:13.149494   33734 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 20:40:13.149503   33734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:40:13.149541   33734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 20:40:13.149601   33734 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.multinode-660958-m03 san=[192.168.39.20 192.168.39.20 localhost 127.0.0.1 minikube multinode-660958-m03]
	I1207 20:40:13.251994   33734 provision.go:172] copyRemoteCerts
	I1207 20:40:13.252060   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:40:13.252090   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHHostname
	I1207 20:40:13.254860   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:13.255187   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:40:13.255210   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:13.255401   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHPort
	I1207 20:40:13.255589   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:40:13.255745   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHUsername
	I1207 20:40:13.255885   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m03/id_rsa Username:docker}
	I1207 20:40:13.347632   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 20:40:13.347708   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1207 20:40:13.373184   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 20:40:13.373272   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 20:40:13.397127   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 20:40:13.397190   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 20:40:13.420595   33734 provision.go:86] duration metric: configureAuth took 277.889217ms
	I1207 20:40:13.420629   33734 buildroot.go:189] setting minikube options for container-runtime
	I1207 20:40:13.420902   33734 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:40:13.420979   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHHostname
	I1207 20:40:13.423606   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:13.424015   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:40:13.424046   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:40:13.424191   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHPort
	I1207 20:40:13.424387   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:40:13.424527   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:40:13.424647   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHUsername
	I1207 20:40:13.424807   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:40:13.425129   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1207 20:40:13.425151   33734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 20:41:43.893365   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 20:41:43.893399   33734 machine.go:91] provisioned docker machine in 1m31.021038331s
	I1207 20:41:43.893411   33734 start.go:300] post-start starting for "multinode-660958-m03" (driver="kvm2")
	I1207 20:41:43.893428   33734 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:41:43.893452   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .DriverName
	I1207 20:41:43.893810   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:41:43.893836   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHHostname
	I1207 20:41:43.896779   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:43.897250   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:41:43.897275   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:43.897415   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHPort
	I1207 20:41:43.897606   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:41:43.897783   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHUsername
	I1207 20:41:43.897899   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m03/id_rsa Username:docker}
	I1207 20:41:43.988938   33734 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:41:43.992926   33734 command_runner.go:130] > NAME=Buildroot
	I1207 20:41:43.992943   33734 command_runner.go:130] > VERSION=2021.02.12-1-ge2b7375-dirty
	I1207 20:41:43.992947   33734 command_runner.go:130] > ID=buildroot
	I1207 20:41:43.992953   33734 command_runner.go:130] > VERSION_ID=2021.02.12
	I1207 20:41:43.992957   33734 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1207 20:41:43.993246   33734 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 20:41:43.993263   33734 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 20:41:43.993333   33734 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 20:41:43.993406   33734 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 20:41:43.993415   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /etc/ssl/certs/168402.pem
	I1207 20:41:43.993488   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 20:41:44.002557   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:41:44.024915   33734 start.go:303] post-start completed in 131.488901ms
	I1207 20:41:44.024937   33734 fix.go:56] fixHost completed within 1m31.175364063s
	I1207 20:41:44.024979   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHHostname
	I1207 20:41:44.027644   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:44.028033   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:41:44.028066   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:44.028262   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHPort
	I1207 20:41:44.028448   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:41:44.028597   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:41:44.028749   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHUsername
	I1207 20:41:44.028904   33734 main.go:141] libmachine: Using SSH client type: native
	I1207 20:41:44.029198   33734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1207 20:41:44.029211   33734 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 20:41:44.150914   33734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701981704.142878591
	
	I1207 20:41:44.150937   33734 fix.go:206] guest clock: 1701981704.142878591
	I1207 20:41:44.150946   33734 fix.go:219] Guest: 2023-12-07 20:41:44.142878591 +0000 UTC Remote: 2023-12-07 20:41:44.024942356 +0000 UTC m=+554.730663326 (delta=117.936235ms)
	I1207 20:41:44.150964   33734 fix.go:190] guest clock delta is within tolerance: 117.936235ms
	I1207 20:41:44.150970   33734 start.go:83] releasing machines lock for "multinode-660958-m03", held for 1m31.301409703s
	I1207 20:41:44.150996   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .DriverName
	I1207 20:41:44.151218   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetIP
	I1207 20:41:44.153880   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:44.154367   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:41:44.154389   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:44.156360   33734 out.go:177] * Found network options:
	I1207 20:41:44.157845   33734 out.go:177]   - NO_PROXY=192.168.39.19,192.168.39.69
	W1207 20:41:44.159208   33734 proxy.go:119] fail to check proxy env: Error ip not in block
	W1207 20:41:44.159229   33734 proxy.go:119] fail to check proxy env: Error ip not in block
	I1207 20:41:44.159258   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .DriverName
	I1207 20:41:44.159742   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .DriverName
	I1207 20:41:44.159919   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .DriverName
	I1207 20:41:44.160006   33734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:41:44.160036   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHHostname
	W1207 20:41:44.160111   33734 proxy.go:119] fail to check proxy env: Error ip not in block
	W1207 20:41:44.160139   33734 proxy.go:119] fail to check proxy env: Error ip not in block
	I1207 20:41:44.160194   33734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 20:41:44.160210   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHHostname
	I1207 20:41:44.162658   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:44.162825   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:44.163019   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:41:44.163051   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:44.163165   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHPort
	I1207 20:41:44.163334   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:41:44.163358   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:44.163367   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:41:44.163501   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHPort
	I1207 20:41:44.163561   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHUsername
	I1207 20:41:44.163650   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHKeyPath
	I1207 20:41:44.163722   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m03/id_rsa Username:docker}
	I1207 20:41:44.163785   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetSSHUsername
	I1207 20:41:44.163925   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m03/id_rsa Username:docker}
	I1207 20:41:44.404913   33734 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1207 20:41:44.405040   33734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1207 20:41:44.411180   33734 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1207 20:41:44.411210   33734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 20:41:44.411260   33734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 20:41:44.419823   33734 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 20:41:44.419913   33734 start.go:475] detecting cgroup driver to use...
	I1207 20:41:44.419987   33734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 20:41:44.433799   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 20:41:44.446097   33734 docker.go:203] disabling cri-docker service (if available) ...
	I1207 20:41:44.446159   33734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 20:41:44.462149   33734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 20:41:44.475502   33734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 20:41:44.598445   33734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 20:41:44.731797   33734 docker.go:219] disabling docker service ...
	I1207 20:41:44.731859   33734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 20:41:44.748637   33734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 20:41:44.762246   33734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 20:41:44.881473   33734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 20:41:44.993680   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 20:41:45.009776   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:41:45.027467   33734 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1207 20:41:45.027726   33734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 20:41:45.027782   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:41:45.038052   33734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 20:41:45.038119   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:41:45.048644   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:41:45.058910   33734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:41:45.069350   33734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 20:41:45.079557   33734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:41:45.088078   33734 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1207 20:41:45.088288   33734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 20:41:45.096952   33734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:41:45.215933   33734 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 20:41:45.744075   33734 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 20:41:45.744148   33734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 20:41:45.749262   33734 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1207 20:41:45.749281   33734 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1207 20:41:45.749287   33734 command_runner.go:130] > Device: 16h/22d	Inode: 1234        Links: 1
	I1207 20:41:45.749294   33734 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1207 20:41:45.749301   33734 command_runner.go:130] > Access: 2023-12-07 20:41:45.670283994 +0000
	I1207 20:41:45.749306   33734 command_runner.go:130] > Modify: 2023-12-07 20:41:45.670283994 +0000
	I1207 20:41:45.749312   33734 command_runner.go:130] > Change: 2023-12-07 20:41:45.670283994 +0000
	I1207 20:41:45.749317   33734 command_runner.go:130] >  Birth: -
	I1207 20:41:45.749333   33734 start.go:543] Will wait 60s for crictl version
	I1207 20:41:45.749374   33734 ssh_runner.go:195] Run: which crictl
	I1207 20:41:45.753056   33734 command_runner.go:130] > /usr/bin/crictl
	I1207 20:41:45.753115   33734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 20:41:45.796588   33734 command_runner.go:130] > Version:  0.1.0
	I1207 20:41:45.796613   33734 command_runner.go:130] > RuntimeName:  cri-o
	I1207 20:41:45.796621   33734 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1207 20:41:45.796629   33734 command_runner.go:130] > RuntimeApiVersion:  v1
	I1207 20:41:45.796666   33734 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 20:41:45.796727   33734 ssh_runner.go:195] Run: crio --version
	I1207 20:41:45.846502   33734 command_runner.go:130] > crio version 1.24.1
	I1207 20:41:45.846519   33734 command_runner.go:130] > Version:          1.24.1
	I1207 20:41:45.846526   33734 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1207 20:41:45.846531   33734 command_runner.go:130] > GitTreeState:     dirty
	I1207 20:41:45.846537   33734 command_runner.go:130] > BuildDate:        2023-12-05T19:18:32Z
	I1207 20:41:45.846542   33734 command_runner.go:130] > GoVersion:        go1.19.9
	I1207 20:41:45.846552   33734 command_runner.go:130] > Compiler:         gc
	I1207 20:41:45.846561   33734 command_runner.go:130] > Platform:         linux/amd64
	I1207 20:41:45.846572   33734 command_runner.go:130] > Linkmode:         dynamic
	I1207 20:41:45.846585   33734 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1207 20:41:45.846593   33734 command_runner.go:130] > SeccompEnabled:   true
	I1207 20:41:45.846599   33734 command_runner.go:130] > AppArmorEnabled:  false
	I1207 20:41:45.848166   33734 ssh_runner.go:195] Run: crio --version
	I1207 20:41:45.892840   33734 command_runner.go:130] > crio version 1.24.1
	I1207 20:41:45.892864   33734 command_runner.go:130] > Version:          1.24.1
	I1207 20:41:45.892873   33734 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1207 20:41:45.892878   33734 command_runner.go:130] > GitTreeState:     dirty
	I1207 20:41:45.892886   33734 command_runner.go:130] > BuildDate:        2023-12-05T19:18:32Z
	I1207 20:41:45.892893   33734 command_runner.go:130] > GoVersion:        go1.19.9
	I1207 20:41:45.892899   33734 command_runner.go:130] > Compiler:         gc
	I1207 20:41:45.892906   33734 command_runner.go:130] > Platform:         linux/amd64
	I1207 20:41:45.892915   33734 command_runner.go:130] > Linkmode:         dynamic
	I1207 20:41:45.892930   33734 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1207 20:41:45.892941   33734 command_runner.go:130] > SeccompEnabled:   true
	I1207 20:41:45.892952   33734 command_runner.go:130] > AppArmorEnabled:  false
	I1207 20:41:45.896303   33734 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 20:41:45.897597   33734 out.go:177]   - env NO_PROXY=192.168.39.19
	I1207 20:41:45.898898   33734 out.go:177]   - env NO_PROXY=192.168.39.19,192.168.39.69
	I1207 20:41:45.900120   33734 main.go:141] libmachine: (multinode-660958-m03) Calling .GetIP
	I1207 20:41:45.902567   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:45.902874   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:29:76", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:30:06 +0000 UTC Type:0 Mac:52:54:00:cd:29:76 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-660958-m03 Clientid:01:52:54:00:cd:29:76}
	I1207 20:41:45.902909   33734 main.go:141] libmachine: (multinode-660958-m03) DBG | domain multinode-660958-m03 has defined IP address 192.168.39.20 and MAC address 52:54:00:cd:29:76 in network mk-multinode-660958
	I1207 20:41:45.903100   33734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 20:41:45.907262   33734 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1207 20:41:45.907297   33734 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958 for IP: 192.168.39.20
	I1207 20:41:45.907318   33734 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:41:45.907452   33734 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 20:41:45.907507   33734 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 20:41:45.907522   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 20:41:45.907537   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 20:41:45.907550   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 20:41:45.907565   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 20:41:45.907635   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 20:41:45.907673   33734 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 20:41:45.907690   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 20:41:45.907730   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 20:41:45.907758   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:41:45.907791   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 20:41:45.907846   33734 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:41:45.907877   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> /usr/share/ca-certificates/168402.pem
	I1207 20:41:45.907896   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:41:45.907914   33734 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem -> /usr/share/ca-certificates/16840.pem
	I1207 20:41:45.908245   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:41:45.932408   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 20:41:45.961328   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:41:45.984579   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:41:46.007652   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 20:41:46.032545   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:41:46.057690   33734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 20:41:46.081564   33734 ssh_runner.go:195] Run: openssl version
	I1207 20:41:46.087365   33734 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1207 20:41:46.087429   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 20:41:46.097365   33734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 20:41:46.101849   33734 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:41:46.101946   33734 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:41:46.101998   33734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 20:41:46.107307   33734 command_runner.go:130] > 3ec20f2e
	I1207 20:41:46.107589   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 20:41:46.115966   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:41:46.125593   33734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:41:46.129822   33734 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:41:46.129983   33734 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:41:46.130035   33734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:41:46.135851   33734 command_runner.go:130] > b5213941
	I1207 20:41:46.135907   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 20:41:46.144004   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 20:41:46.153511   33734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 20:41:46.158218   33734 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:41:46.158243   33734 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:41:46.158280   33734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 20:41:46.163723   33734 command_runner.go:130] > 51391683
	I1207 20:41:46.163782   33734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 20:41:46.172683   33734 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:41:46.176619   33734 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:41:46.176732   33734 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:41:46.176816   33734 ssh_runner.go:195] Run: crio config
	I1207 20:41:46.233286   33734 command_runner.go:130] ! time="2023-12-07 20:41:46.225352424Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1207 20:41:46.233313   33734 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1207 20:41:46.240327   33734 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1207 20:41:46.240367   33734 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1207 20:41:46.240378   33734 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1207 20:41:46.240384   33734 command_runner.go:130] > #
	I1207 20:41:46.240395   33734 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1207 20:41:46.240405   33734 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1207 20:41:46.240418   33734 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1207 20:41:46.240429   33734 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1207 20:41:46.240439   33734 command_runner.go:130] > # reload'.
	I1207 20:41:46.240448   33734 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1207 20:41:46.240462   33734 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1207 20:41:46.240472   33734 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1207 20:41:46.240481   33734 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1207 20:41:46.240485   33734 command_runner.go:130] > [crio]
	I1207 20:41:46.240496   33734 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1207 20:41:46.240506   33734 command_runner.go:130] > # container images, in this directory.
	I1207 20:41:46.240516   33734 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1207 20:41:46.240530   33734 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1207 20:41:46.240542   33734 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1207 20:41:46.240555   33734 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1207 20:41:46.240568   33734 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1207 20:41:46.240578   33734 command_runner.go:130] > storage_driver = "overlay"
	I1207 20:41:46.240602   33734 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1207 20:41:46.240614   33734 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1207 20:41:46.240627   33734 command_runner.go:130] > storage_option = [
	I1207 20:41:46.240637   33734 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1207 20:41:46.240645   33734 command_runner.go:130] > ]
	I1207 20:41:46.240657   33734 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1207 20:41:46.240670   33734 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1207 20:41:46.240678   33734 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1207 20:41:46.240690   33734 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1207 20:41:46.240702   33734 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1207 20:41:46.240713   33734 command_runner.go:130] > # always happen on a node reboot
	I1207 20:41:46.240725   33734 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1207 20:41:46.240736   33734 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1207 20:41:46.240744   33734 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1207 20:41:46.240753   33734 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1207 20:41:46.240760   33734 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1207 20:41:46.240768   33734 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1207 20:41:46.240778   33734 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1207 20:41:46.240783   33734 command_runner.go:130] > # internal_wipe = true
	I1207 20:41:46.240797   33734 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1207 20:41:46.240806   33734 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1207 20:41:46.240812   33734 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1207 20:41:46.240820   33734 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1207 20:41:46.240826   33734 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1207 20:41:46.240832   33734 command_runner.go:130] > [crio.api]
	I1207 20:41:46.240838   33734 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1207 20:41:46.240845   33734 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1207 20:41:46.240850   33734 command_runner.go:130] > # IP address on which the stream server will listen.
	I1207 20:41:46.240854   33734 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1207 20:41:46.240861   33734 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1207 20:41:46.240866   33734 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1207 20:41:46.240870   33734 command_runner.go:130] > # stream_port = "0"
	I1207 20:41:46.240875   33734 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1207 20:41:46.240879   33734 command_runner.go:130] > # stream_enable_tls = false
	I1207 20:41:46.240887   33734 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1207 20:41:46.240891   33734 command_runner.go:130] > # stream_idle_timeout = ""
	I1207 20:41:46.240897   33734 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1207 20:41:46.240906   33734 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1207 20:41:46.240910   33734 command_runner.go:130] > # minutes.
	I1207 20:41:46.240916   33734 command_runner.go:130] > # stream_tls_cert = ""
	I1207 20:41:46.240922   33734 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1207 20:41:46.240931   33734 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1207 20:41:46.240935   33734 command_runner.go:130] > # stream_tls_key = ""
	I1207 20:41:46.240941   33734 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1207 20:41:46.240952   33734 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1207 20:41:46.240957   33734 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1207 20:41:46.240961   33734 command_runner.go:130] > # stream_tls_ca = ""
	I1207 20:41:46.240969   33734 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1207 20:41:46.240974   33734 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1207 20:41:46.240981   33734 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1207 20:41:46.240986   33734 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1207 20:41:46.241001   33734 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1207 20:41:46.241009   33734 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1207 20:41:46.241014   33734 command_runner.go:130] > [crio.runtime]
	I1207 20:41:46.241021   33734 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1207 20:41:46.241032   33734 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1207 20:41:46.241043   33734 command_runner.go:130] > # "nofile=1024:2048"
	I1207 20:41:46.241057   33734 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1207 20:41:46.241068   33734 command_runner.go:130] > # default_ulimits = [
	I1207 20:41:46.241077   33734 command_runner.go:130] > # ]
	I1207 20:41:46.241086   33734 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1207 20:41:46.241093   33734 command_runner.go:130] > # no_pivot = false
	I1207 20:41:46.241099   33734 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1207 20:41:46.241105   33734 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1207 20:41:46.241113   33734 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1207 20:41:46.241118   33734 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1207 20:41:46.241126   33734 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1207 20:41:46.241134   33734 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1207 20:41:46.241146   33734 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1207 20:41:46.241157   33734 command_runner.go:130] > # Cgroup setting for conmon
	I1207 20:41:46.241169   33734 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1207 20:41:46.241180   33734 command_runner.go:130] > conmon_cgroup = "pod"
	I1207 20:41:46.241193   33734 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1207 20:41:46.241201   33734 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1207 20:41:46.241211   33734 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1207 20:41:46.241217   33734 command_runner.go:130] > conmon_env = [
	I1207 20:41:46.241224   33734 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1207 20:41:46.241231   33734 command_runner.go:130] > ]
	I1207 20:41:46.241239   33734 command_runner.go:130] > # Additional environment variables to set for all the
	I1207 20:41:46.241251   33734 command_runner.go:130] > # containers. These are overridden if set in the
	I1207 20:41:46.241264   33734 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1207 20:41:46.241275   33734 command_runner.go:130] > # default_env = [
	I1207 20:41:46.241284   33734 command_runner.go:130] > # ]
	I1207 20:41:46.241296   33734 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1207 20:41:46.241306   33734 command_runner.go:130] > # selinux = false
	I1207 20:41:46.241318   33734 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1207 20:41:46.241327   33734 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1207 20:41:46.241336   33734 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1207 20:41:46.241351   33734 command_runner.go:130] > # seccomp_profile = ""
	I1207 20:41:46.241364   33734 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1207 20:41:46.241377   33734 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1207 20:41:46.241391   33734 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1207 20:41:46.241401   33734 command_runner.go:130] > # which might increase security.
	I1207 20:41:46.241412   33734 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1207 20:41:46.241425   33734 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1207 20:41:46.241435   33734 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1207 20:41:46.241448   33734 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1207 20:41:46.241463   33734 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1207 20:41:46.241476   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:41:46.241487   33734 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1207 20:41:46.241499   33734 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1207 20:41:46.241510   33734 command_runner.go:130] > # the cgroup blockio controller.
	I1207 20:41:46.241520   33734 command_runner.go:130] > # blockio_config_file = ""
	I1207 20:41:46.241531   33734 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1207 20:41:46.241539   33734 command_runner.go:130] > # irqbalance daemon.
	I1207 20:41:46.241551   33734 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1207 20:41:46.241565   33734 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1207 20:41:46.241578   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:41:46.241588   33734 command_runner.go:130] > # rdt_config_file = ""
	I1207 20:41:46.241599   33734 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1207 20:41:46.241610   33734 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1207 20:41:46.241623   33734 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1207 20:41:46.241632   33734 command_runner.go:130] > # separate_pull_cgroup = ""
	I1207 20:41:46.241643   33734 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1207 20:41:46.241659   33734 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1207 20:41:46.241669   33734 command_runner.go:130] > # will be added.
	I1207 20:41:46.241680   33734 command_runner.go:130] > # default_capabilities = [
	I1207 20:41:46.241690   33734 command_runner.go:130] > # 	"CHOWN",
	I1207 20:41:46.241700   33734 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1207 20:41:46.241709   33734 command_runner.go:130] > # 	"FSETID",
	I1207 20:41:46.241719   33734 command_runner.go:130] > # 	"FOWNER",
	I1207 20:41:46.241728   33734 command_runner.go:130] > # 	"SETGID",
	I1207 20:41:46.241738   33734 command_runner.go:130] > # 	"SETUID",
	I1207 20:41:46.241747   33734 command_runner.go:130] > # 	"SETPCAP",
	I1207 20:41:46.241757   33734 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1207 20:41:46.241766   33734 command_runner.go:130] > # 	"KILL",
	I1207 20:41:46.241774   33734 command_runner.go:130] > # ]
	I1207 20:41:46.241787   33734 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1207 20:41:46.241799   33734 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1207 20:41:46.241808   33734 command_runner.go:130] > # default_sysctls = [
	I1207 20:41:46.241815   33734 command_runner.go:130] > # ]
	I1207 20:41:46.241825   33734 command_runner.go:130] > # List of devices on the host that a
	I1207 20:41:46.241838   33734 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1207 20:41:46.241848   33734 command_runner.go:130] > # allowed_devices = [
	I1207 20:41:46.241857   33734 command_runner.go:130] > # 	"/dev/fuse",
	I1207 20:41:46.241866   33734 command_runner.go:130] > # ]
	I1207 20:41:46.241876   33734 command_runner.go:130] > # List of additional devices, specified as
	I1207 20:41:46.241891   33734 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1207 20:41:46.241904   33734 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1207 20:41:46.241962   33734 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1207 20:41:46.241973   33734 command_runner.go:130] > # additional_devices = [
	I1207 20:41:46.241979   33734 command_runner.go:130] > # ]
	I1207 20:41:46.241987   33734 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1207 20:41:46.241995   33734 command_runner.go:130] > # cdi_spec_dirs = [
	I1207 20:41:46.241999   33734 command_runner.go:130] > # 	"/etc/cdi",
	I1207 20:41:46.242004   33734 command_runner.go:130] > # 	"/var/run/cdi",
	I1207 20:41:46.242010   33734 command_runner.go:130] > # ]
	I1207 20:41:46.242017   33734 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1207 20:41:46.242025   33734 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1207 20:41:46.242029   33734 command_runner.go:130] > # Defaults to false.
	I1207 20:41:46.242037   33734 command_runner.go:130] > # device_ownership_from_security_context = false
	I1207 20:41:46.242043   33734 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1207 20:41:46.242051   33734 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1207 20:41:46.242055   33734 command_runner.go:130] > # hooks_dir = [
	I1207 20:41:46.242062   33734 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1207 20:41:46.242067   33734 command_runner.go:130] > # ]
	I1207 20:41:46.242074   33734 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1207 20:41:46.242083   33734 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1207 20:41:46.242088   33734 command_runner.go:130] > # its default mounts from the following two files:
	I1207 20:41:46.242094   33734 command_runner.go:130] > #
	I1207 20:41:46.242101   33734 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1207 20:41:46.242109   33734 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1207 20:41:46.242117   33734 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1207 20:41:46.242121   33734 command_runner.go:130] > #
	I1207 20:41:46.242127   33734 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1207 20:41:46.242136   33734 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1207 20:41:46.242145   33734 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1207 20:41:46.242152   33734 command_runner.go:130] > #      only add mounts it finds in this file.
	I1207 20:41:46.242158   33734 command_runner.go:130] > #
	I1207 20:41:46.242162   33734 command_runner.go:130] > # default_mounts_file = ""
	I1207 20:41:46.242170   33734 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1207 20:41:46.242176   33734 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1207 20:41:46.242182   33734 command_runner.go:130] > pids_limit = 1024
	I1207 20:41:46.242188   33734 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1207 20:41:46.242196   33734 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1207 20:41:46.242205   33734 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1207 20:41:46.242215   33734 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1207 20:41:46.242221   33734 command_runner.go:130] > # log_size_max = -1
	I1207 20:41:46.242228   33734 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1207 20:41:46.242234   33734 command_runner.go:130] > # log_to_journald = false
	I1207 20:41:46.242240   33734 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1207 20:41:46.242247   33734 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1207 20:41:46.242253   33734 command_runner.go:130] > # Path to directory for container attach sockets.
	I1207 20:41:46.242261   33734 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1207 20:41:46.242266   33734 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1207 20:41:46.242273   33734 command_runner.go:130] > # bind_mount_prefix = ""
	I1207 20:41:46.242279   33734 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1207 20:41:46.242285   33734 command_runner.go:130] > # read_only = false
	I1207 20:41:46.242291   33734 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1207 20:41:46.242299   33734 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1207 20:41:46.242306   33734 command_runner.go:130] > # live configuration reload.
	I1207 20:41:46.242310   33734 command_runner.go:130] > # log_level = "info"
	I1207 20:41:46.242318   33734 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1207 20:41:46.242325   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:41:46.242329   33734 command_runner.go:130] > # log_filter = ""
	I1207 20:41:46.242337   33734 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1207 20:41:46.242349   33734 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1207 20:41:46.242356   33734 command_runner.go:130] > # separated by comma.
	I1207 20:41:46.242360   33734 command_runner.go:130] > # uid_mappings = ""
	I1207 20:41:46.242369   33734 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1207 20:41:46.242383   33734 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1207 20:41:46.242393   33734 command_runner.go:130] > # separated by comma.
	I1207 20:41:46.242402   33734 command_runner.go:130] > # gid_mappings = ""
	I1207 20:41:46.242415   33734 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1207 20:41:46.242428   33734 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1207 20:41:46.242441   33734 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1207 20:41:46.242451   33734 command_runner.go:130] > # minimum_mappable_uid = -1
	I1207 20:41:46.242464   33734 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1207 20:41:46.242476   33734 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1207 20:41:46.242489   33734 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1207 20:41:46.242498   33734 command_runner.go:130] > # minimum_mappable_gid = -1
	I1207 20:41:46.242508   33734 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1207 20:41:46.242521   33734 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1207 20:41:46.242533   33734 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1207 20:41:46.242543   33734 command_runner.go:130] > # ctr_stop_timeout = 30
	I1207 20:41:46.242551   33734 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1207 20:41:46.242559   33734 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1207 20:41:46.242566   33734 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1207 20:41:46.242573   33734 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1207 20:41:46.242582   33734 command_runner.go:130] > drop_infra_ctr = false
	I1207 20:41:46.242588   33734 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1207 20:41:46.242596   33734 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1207 20:41:46.242604   33734 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1207 20:41:46.242610   33734 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1207 20:41:46.242616   33734 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1207 20:41:46.242623   33734 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1207 20:41:46.242631   33734 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1207 20:41:46.242638   33734 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1207 20:41:46.242644   33734 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1207 20:41:46.242651   33734 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1207 20:41:46.242659   33734 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1207 20:41:46.242668   33734 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1207 20:41:46.242674   33734 command_runner.go:130] > # default_runtime = "runc"
	I1207 20:41:46.242679   33734 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1207 20:41:46.242689   33734 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1207 20:41:46.242701   33734 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1207 20:41:46.242710   33734 command_runner.go:130] > # creation as a file is not desired either.
	I1207 20:41:46.242721   33734 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1207 20:41:46.242728   33734 command_runner.go:130] > # the hostname is being managed dynamically.
	I1207 20:41:46.242733   33734 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1207 20:41:46.242738   33734 command_runner.go:130] > # ]
	I1207 20:41:46.242745   33734 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1207 20:41:46.242753   33734 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1207 20:41:46.242761   33734 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1207 20:41:46.242769   33734 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1207 20:41:46.242773   33734 command_runner.go:130] > #
	I1207 20:41:46.242780   33734 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1207 20:41:46.242785   33734 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1207 20:41:46.242791   33734 command_runner.go:130] > #  runtime_type = "oci"
	I1207 20:41:46.242796   33734 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1207 20:41:46.242803   33734 command_runner.go:130] > #  privileged_without_host_devices = false
	I1207 20:41:46.242808   33734 command_runner.go:130] > #  allowed_annotations = []
	I1207 20:41:46.242813   33734 command_runner.go:130] > # Where:
	I1207 20:41:46.242819   33734 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1207 20:41:46.242828   33734 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1207 20:41:46.242836   33734 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1207 20:41:46.242844   33734 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1207 20:41:46.242851   33734 command_runner.go:130] > #   in $PATH.
	I1207 20:41:46.242857   33734 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1207 20:41:46.242864   33734 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1207 20:41:46.242870   33734 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1207 20:41:46.242876   33734 command_runner.go:130] > #   state.
	I1207 20:41:46.242886   33734 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1207 20:41:46.242899   33734 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1207 20:41:46.242911   33734 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1207 20:41:46.242925   33734 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1207 20:41:46.242938   33734 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1207 20:41:46.242952   33734 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1207 20:41:46.242962   33734 command_runner.go:130] > #   The currently recognized values are:
	I1207 20:41:46.242975   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1207 20:41:46.242990   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1207 20:41:46.243003   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1207 20:41:46.243015   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1207 20:41:46.243030   33734 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1207 20:41:46.243042   33734 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1207 20:41:46.243053   33734 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1207 20:41:46.243062   33734 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1207 20:41:46.243067   33734 command_runner.go:130] > #   should be moved to the container's cgroup
	I1207 20:41:46.243072   33734 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1207 20:41:46.243077   33734 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1207 20:41:46.243083   33734 command_runner.go:130] > runtime_type = "oci"
	I1207 20:41:46.243088   33734 command_runner.go:130] > runtime_root = "/run/runc"
	I1207 20:41:46.243093   33734 command_runner.go:130] > runtime_config_path = ""
	I1207 20:41:46.243097   33734 command_runner.go:130] > monitor_path = ""
	I1207 20:41:46.243103   33734 command_runner.go:130] > monitor_cgroup = ""
	I1207 20:41:46.243107   33734 command_runner.go:130] > monitor_exec_cgroup = ""
	I1207 20:41:46.243114   33734 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1207 20:41:46.243122   33734 command_runner.go:130] > # running containers
	I1207 20:41:46.243126   33734 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1207 20:41:46.243132   33734 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1207 20:41:46.243160   33734 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1207 20:41:46.243169   33734 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1207 20:41:46.243174   33734 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1207 20:41:46.243181   33734 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1207 20:41:46.243186   33734 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1207 20:41:46.243192   33734 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1207 20:41:46.243197   33734 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1207 20:41:46.243204   33734 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1207 20:41:46.243210   33734 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1207 20:41:46.243217   33734 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1207 20:41:46.243224   33734 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1207 20:41:46.243233   33734 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1207 20:41:46.243243   33734 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1207 20:41:46.243251   33734 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1207 20:41:46.243262   33734 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1207 20:41:46.243272   33734 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1207 20:41:46.243280   33734 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1207 20:41:46.243289   33734 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1207 20:41:46.243293   33734 command_runner.go:130] > # Example:
	I1207 20:41:46.243300   33734 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1207 20:41:46.243305   33734 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1207 20:41:46.243312   33734 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1207 20:41:46.243319   33734 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1207 20:41:46.243324   33734 command_runner.go:130] > # cpuset = 0
	I1207 20:41:46.243328   33734 command_runner.go:130] > # cpushares = "0-1"
	I1207 20:41:46.243334   33734 command_runner.go:130] > # Where:
	I1207 20:41:46.243339   33734 command_runner.go:130] > # The workload name is workload-type.
	I1207 20:41:46.243351   33734 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1207 20:41:46.243359   33734 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1207 20:41:46.243367   33734 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1207 20:41:46.243374   33734 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1207 20:41:46.243382   33734 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1207 20:41:46.243388   33734 command_runner.go:130] > # 
	I1207 20:41:46.243394   33734 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1207 20:41:46.243399   33734 command_runner.go:130] > #
	I1207 20:41:46.243405   33734 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1207 20:41:46.243415   33734 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1207 20:41:46.243424   33734 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1207 20:41:46.243432   33734 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1207 20:41:46.243440   33734 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1207 20:41:46.243446   33734 command_runner.go:130] > [crio.image]
	I1207 20:41:46.243452   33734 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1207 20:41:46.243458   33734 command_runner.go:130] > # default_transport = "docker://"
	I1207 20:41:46.243464   33734 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1207 20:41:46.243473   33734 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1207 20:41:46.243477   33734 command_runner.go:130] > # global_auth_file = ""
	I1207 20:41:46.243485   33734 command_runner.go:130] > # The image used to instantiate infra containers.
	I1207 20:41:46.243490   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:41:46.243497   33734 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1207 20:41:46.243504   33734 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1207 20:41:46.243512   33734 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1207 20:41:46.243519   33734 command_runner.go:130] > # This option supports live configuration reload.
	I1207 20:41:46.243526   33734 command_runner.go:130] > # pause_image_auth_file = ""
	I1207 20:41:46.243532   33734 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1207 20:41:46.243543   33734 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1207 20:41:46.243556   33734 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1207 20:41:46.243569   33734 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1207 20:41:46.243578   33734 command_runner.go:130] > # pause_command = "/pause"
	I1207 20:41:46.243591   33734 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1207 20:41:46.243605   33734 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1207 20:41:46.243617   33734 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1207 20:41:46.243632   33734 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1207 20:41:46.243643   33734 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1207 20:41:46.243652   33734 command_runner.go:130] > # signature_policy = ""
	I1207 20:41:46.243664   33734 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1207 20:41:46.243678   33734 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1207 20:41:46.243687   33734 command_runner.go:130] > # changing them here.
	I1207 20:41:46.243698   33734 command_runner.go:130] > # insecure_registries = [
	I1207 20:41:46.243707   33734 command_runner.go:130] > # ]
	I1207 20:41:46.243725   33734 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1207 20:41:46.243737   33734 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1207 20:41:46.243747   33734 command_runner.go:130] > # image_volumes = "mkdir"
	I1207 20:41:46.243759   33734 command_runner.go:130] > # Temporary directory to use for storing big files
	I1207 20:41:46.243767   33734 command_runner.go:130] > # big_files_temporary_dir = ""
	I1207 20:41:46.243773   33734 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1207 20:41:46.243780   33734 command_runner.go:130] > # CNI plugins.
	I1207 20:41:46.243784   33734 command_runner.go:130] > [crio.network]
	I1207 20:41:46.243790   33734 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1207 20:41:46.243798   33734 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1207 20:41:46.243802   33734 command_runner.go:130] > # cni_default_network = ""
	I1207 20:41:46.243810   33734 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1207 20:41:46.243815   33734 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1207 20:41:46.243823   33734 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1207 20:41:46.243829   33734 command_runner.go:130] > # plugin_dirs = [
	I1207 20:41:46.243833   33734 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1207 20:41:46.243839   33734 command_runner.go:130] > # ]
	I1207 20:41:46.243845   33734 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1207 20:41:46.243851   33734 command_runner.go:130] > [crio.metrics]
	I1207 20:41:46.243856   33734 command_runner.go:130] > # Globally enable or disable metrics support.
	I1207 20:41:46.243862   33734 command_runner.go:130] > enable_metrics = true
	I1207 20:41:46.243867   33734 command_runner.go:130] > # Specify enabled metrics collectors.
	I1207 20:41:46.243874   33734 command_runner.go:130] > # Per default all metrics are enabled.
	I1207 20:41:46.243880   33734 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1207 20:41:46.243888   33734 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1207 20:41:46.243894   33734 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1207 20:41:46.243900   33734 command_runner.go:130] > # metrics_collectors = [
	I1207 20:41:46.243904   33734 command_runner.go:130] > # 	"operations",
	I1207 20:41:46.243911   33734 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1207 20:41:46.243915   33734 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1207 20:41:46.243922   33734 command_runner.go:130] > # 	"operations_errors",
	I1207 20:41:46.243926   33734 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1207 20:41:46.243932   33734 command_runner.go:130] > # 	"image_pulls_by_name",
	I1207 20:41:46.243937   33734 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1207 20:41:46.243943   33734 command_runner.go:130] > # 	"image_pulls_failures",
	I1207 20:41:46.243948   33734 command_runner.go:130] > # 	"image_pulls_successes",
	I1207 20:41:46.243954   33734 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1207 20:41:46.243958   33734 command_runner.go:130] > # 	"image_layer_reuse",
	I1207 20:41:46.243966   33734 command_runner.go:130] > # 	"containers_oom_total",
	I1207 20:41:46.243970   33734 command_runner.go:130] > # 	"containers_oom",
	I1207 20:41:46.243975   33734 command_runner.go:130] > # 	"processes_defunct",
	I1207 20:41:46.243979   33734 command_runner.go:130] > # 	"operations_total",
	I1207 20:41:46.243983   33734 command_runner.go:130] > # 	"operations_latency_seconds",
	I1207 20:41:46.243987   33734 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1207 20:41:46.243994   33734 command_runner.go:130] > # 	"operations_errors_total",
	I1207 20:41:46.243998   33734 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1207 20:41:46.244002   33734 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1207 20:41:46.244009   33734 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1207 20:41:46.244013   33734 command_runner.go:130] > # 	"image_pulls_success_total",
	I1207 20:41:46.244020   33734 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1207 20:41:46.244024   33734 command_runner.go:130] > # 	"containers_oom_count_total",
	I1207 20:41:46.244030   33734 command_runner.go:130] > # ]
	I1207 20:41:46.244035   33734 command_runner.go:130] > # The port on which the metrics server will listen.
	I1207 20:41:46.244041   33734 command_runner.go:130] > # metrics_port = 9090
	I1207 20:41:46.244046   33734 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1207 20:41:46.244052   33734 command_runner.go:130] > # metrics_socket = ""
	I1207 20:41:46.244057   33734 command_runner.go:130] > # The certificate for the secure metrics server.
	I1207 20:41:46.244066   33734 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1207 20:41:46.244072   33734 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1207 20:41:46.244079   33734 command_runner.go:130] > # certificate on any modification event.
	I1207 20:41:46.244082   33734 command_runner.go:130] > # metrics_cert = ""
	I1207 20:41:46.244090   33734 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1207 20:41:46.244095   33734 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1207 20:41:46.244101   33734 command_runner.go:130] > # metrics_key = ""
	I1207 20:41:46.244107   33734 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1207 20:41:46.244113   33734 command_runner.go:130] > [crio.tracing]
	I1207 20:41:46.244118   33734 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1207 20:41:46.244124   33734 command_runner.go:130] > # enable_tracing = false
	I1207 20:41:46.244131   33734 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1207 20:41:46.244137   33734 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1207 20:41:46.244143   33734 command_runner.go:130] > # Number of samples to collect per million spans.
	I1207 20:41:46.244149   33734 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1207 20:41:46.244155   33734 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1207 20:41:46.244161   33734 command_runner.go:130] > [crio.stats]
	I1207 20:41:46.244167   33734 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1207 20:41:46.244175   33734 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1207 20:41:46.244179   33734 command_runner.go:130] > # stats_collection_period = 0
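The block above is the full CRI-O configuration minikube echoed back from the node before provisioning it. As an illustrative aside (these commands are not part of the test run, and assume the crio binary and its systemd unit are present on the VM as they are on a standard minikube image):

  # Print the configuration CRI-O runs with (built-in defaults merged with
  # /etc/crio/crio.conf), assuming crio is installed on the node.
  sudo crio config | less

  # Most options need a restart to take effect; options annotated above as
  # "supports live configuration reload" (e.g. pause_image) are re-read on reload.
  sudo systemctl restart crio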
	I1207 20:41:46.244235   33734 cni.go:84] Creating CNI manager for ""
	I1207 20:41:46.244243   33734 cni.go:136] 3 nodes found, recommending kindnet
	I1207 20:41:46.244252   33734 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:41:46.244272   33734 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.20 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-660958 NodeName:multinode-660958-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 20:41:46.244374   33734 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-660958-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.20
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 20:41:46.244423   33734 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-660958-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 20:41:46.244468   33734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 20:41:46.253374   33734 command_runner.go:130] > kubeadm
	I1207 20:41:46.253393   33734 command_runner.go:130] > kubectl
	I1207 20:41:46.253400   33734 command_runner.go:130] > kubelet
	I1207 20:41:46.253458   33734 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 20:41:46.253506   33734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1207 20:41:46.262264   33734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1207 20:41:46.278889   33734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
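At this point minikube has rendered the kubeadm InitConfiguration/ClusterConfiguration for multinode-660958-m03 and copied the kubelet unit plus its 10-kubeadm.conf drop-in onto the node. A couple of hypothetical follow-up commands (not executed by the test) that can confirm what actually landed, assuming the standard minikube layout with binaries under /var/lib/minikube/binaries:

  # Show the kubelet unit together with the 10-kubeadm.conf drop-in minikube wrote.
  systemctl cat kubelet

  # For comparison, print the defaults kubeadm itself would use for a join.
  sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config print join-defaults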
	I1207 20:41:46.294683   33734 ssh_runner.go:195] Run: grep 192.168.39.19	control-plane.minikube.internal$ /etc/hosts
	I1207 20:41:46.298368   33734 command_runner.go:130] > 192.168.39.19	control-plane.minikube.internal
	I1207 20:41:46.298425   33734 host.go:66] Checking if "multinode-660958" exists ...
	I1207 20:41:46.298692   33734 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:41:46.298783   33734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:41:46.298822   33734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:41:46.313225   33734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41865
	I1207 20:41:46.313598   33734 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:41:46.314075   33734 main.go:141] libmachine: Using API Version  1
	I1207 20:41:46.314100   33734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:41:46.314411   33734 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:41:46.314543   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:41:46.314696   33734 start.go:304] JoinCluster: &{Name:multinode-660958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-660958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.20 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:41:46.314803   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1207 20:41:46.314826   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:41:46.317738   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:41:46.318248   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:41:46.318277   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:41:46.318414   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:41:46.318565   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:41:46.318714   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:41:46.318827   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:41:46.500781   33734 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token h1etcu.udpsg5kexzgcew6v --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 20:41:46.500889   33734 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.20 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1207 20:41:46.500930   33734 host.go:66] Checking if "multinode-660958" exists ...
	I1207 20:41:46.501238   33734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:41:46.501283   33734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:41:46.515443   33734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42605
	I1207 20:41:46.515849   33734 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:41:46.516272   33734 main.go:141] libmachine: Using API Version  1
	I1207 20:41:46.516292   33734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:41:46.516596   33734 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:41:46.516761   33734 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:41:46.516933   33734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-660958-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1207 20:41:46.516953   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:41:46.519547   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:41:46.519927   33734 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:41:46.519960   33734 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:41:46.520103   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:41:46.520239   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:41:46.520361   33734 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:41:46.520480   33734 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:41:46.750590   33734 command_runner.go:130] > node/multinode-660958-m03 cordoned
	I1207 20:41:49.783704   33734 command_runner.go:130] > pod "busybox-5bc68d56bd-pzzgm" has DeletionTimestamp older than 1 seconds, skipping
	I1207 20:41:49.783736   33734 command_runner.go:130] > node/multinode-660958-m03 drained
	I1207 20:41:49.785442   33734 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1207 20:41:49.785459   33734 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-6flr5, kube-system/kube-proxy-mjptg
	I1207 20:41:49.785495   33734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-660958-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.268526426s)
	I1207 20:41:49.785513   33734 node.go:108] successfully drained node "m03"
	I1207 20:41:49.785895   33734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:41:49.786167   33734 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:41:49.786488   33734 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1207 20:41:49.786536   33734 round_trippers.go:463] DELETE https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:41:49.786544   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:49.786551   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:49.786559   33734 round_trippers.go:473]     Content-Type: application/json
	I1207 20:41:49.786567   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:49.798511   33734 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1207 20:41:49.798527   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:49.798534   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:49.798539   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:49.798545   33734 round_trippers.go:580]     Content-Length: 171
	I1207 20:41:49.798549   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:49 GMT
	I1207 20:41:49.798554   33734 round_trippers.go:580]     Audit-Id: 29594caf-959c-4f4e-af76-7fd3a0551761
	I1207 20:41:49.798559   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:49.798564   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:49.798582   33734 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-660958-m03","kind":"nodes","uid":"99d6ae8d-c617-438e-918b-4f4d3c4699de"}}
	I1207 20:41:49.798612   33734 node.go:124] successfully deleted node "m03"
	I1207 20:41:49.798627   33734 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.20 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1207 20:41:49.798654   33734 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.20 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1207 20:41:49.798675   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h1etcu.udpsg5kexzgcew6v --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-660958-m03"
	I1207 20:41:49.850797   33734 command_runner.go:130] > [preflight] Running pre-flight checks
	I1207 20:41:50.011998   33734 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1207 20:41:50.012033   33734 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1207 20:41:50.076201   33734 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 20:41:50.076252   33734 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 20:41:50.076682   33734 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1207 20:41:50.223337   33734 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1207 20:41:50.748310   33734 command_runner.go:130] > This node has joined the cluster:
	I1207 20:41:50.748338   33734 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1207 20:41:50.748350   33734 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1207 20:41:50.748361   33734 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1207 20:41:50.751400   33734 command_runner.go:130] ! W1207 20:41:49.842744    2395 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1207 20:41:50.751428   33734 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1207 20:41:50.751440   33734 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1207 20:41:50.751453   33734 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1207 20:41:50.751478   33734 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1207 20:41:51.010023   33734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=multinode-660958 minikube.k8s.io/updated_at=2023_12_07T20_41_51_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:41:51.110681   33734 command_runner.go:130] > node/multinode-660958-m02 labeled
	I1207 20:41:51.125710   33734 command_runner.go:130] > node/multinode-660958-m03 labeled
	I1207 20:41:51.127569   33734 start.go:306] JoinCluster complete in 4.812871993s
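JoinCluster wraps the sequence visible in the log: print a join command on the control plane, drain and delete the stale m03 Node object, run kubeadm join on the worker against the CRI-O socket, then reload and start the kubelet. A rough manual equivalent, offered only as a sketch (node and endpoint names are taken from this run; the token and hash placeholders are not real values):

  # From a machine with cluster-admin access:
  kubectl drain multinode-660958-m03 --ignore-daemonsets --delete-emptydir-data --force --grace-period=1
  kubectl delete node multinode-660958-m03

  # On the control plane, print a fresh join command:
  kubeadm token create --print-join-command --ttl=0

  # On the worker, run the printed command, pointing kubeadm at the CRI-O socket:
  sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --cri-socket unix:///var/run/crio/crio.sock

  # Make sure the kubelet is enabled and running, as minikube does above.
  sudo systemctl enable --now kubelet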
	I1207 20:41:51.127587   33734 cni.go:84] Creating CNI manager for ""
	I1207 20:41:51.127594   33734 cni.go:136] 3 nodes found, recommending kindnet
	I1207 20:41:51.127645   33734 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 20:41:51.134040   33734 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1207 20:41:51.134063   33734 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1207 20:41:51.134076   33734 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1207 20:41:51.134085   33734 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1207 20:41:51.134092   33734 command_runner.go:130] > Access: 2023-12-07 20:37:40.624910444 +0000
	I1207 20:41:51.134098   33734 command_runner.go:130] > Modify: 2023-12-05 19:27:41.000000000 +0000
	I1207 20:41:51.134107   33734 command_runner.go:130] > Change: 2023-12-07 20:37:38.610910444 +0000
	I1207 20:41:51.134117   33734 command_runner.go:130] >  Birth: -
	I1207 20:41:51.134589   33734 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1207 20:41:51.134605   33734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1207 20:41:51.154349   33734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 20:41:51.534472   33734 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1207 20:41:51.538416   33734 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1207 20:41:51.541867   33734 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1207 20:41:51.551027   33734 command_runner.go:130] > daemonset.apps/kindnet configured
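After the node joins, minikube re-applies its kindnet CNI manifest; the "unchanged"/"configured" lines above show the RBAC objects already existed and only the DaemonSet was updated. Illustrative checks (not run by the test; the app=kindnet label is assumed from the upstream kindnet manifest):

  # Wait for the DaemonSet to schedule a kindnet pod on the new node.
  kubectl -n kube-system rollout status daemonset/kindnet

  # List kindnet pods and the nodes they landed on.
  kubectl -n kube-system get pods -l app=kindnet -o wide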
	I1207 20:41:51.553695   33734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:41:51.553976   33734 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:41:51.554245   33734 round_trippers.go:463] GET https://192.168.39.19:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1207 20:41:51.554259   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.554270   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.554279   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.556080   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:41:51.556101   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.556111   33734 round_trippers.go:580]     Audit-Id: f7ef8af8-04a4-4472-8428-ebce61706a97
	I1207 20:41:51.556121   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.556134   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.556144   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.556158   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.556174   33734 round_trippers.go:580]     Content-Length: 291
	I1207 20:41:51.556180   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.556199   33734 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d249b622-1ef8-42db-b860-e5219d7241f8","resourceVersion":"883","creationTimestamp":"2023-12-07T20:27:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1207 20:41:51.556280   33734 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-660958" context rescaled to 1 replicas
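Here the coredns Deployment is scaled back to one replica so that re-adding worker nodes does not leave extra DNS pods behind. The same rescale can be done by hand (illustrative, not part of the test run):

  kubectl -n kube-system scale deployment coredns --replicas=1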
	I1207 20:41:51.556305   33734 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.20 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1207 20:41:51.558149   33734 out.go:177] * Verifying Kubernetes components...
	I1207 20:41:51.559551   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:41:51.573836   33734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:41:51.574210   33734 kapi.go:59] client config for multinode-660958: &rest.Config{Host:"https://192.168.39.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/multinode-660958/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:41:51.574473   33734 node_ready.go:35] waiting up to 6m0s for node "multinode-660958-m03" to be "Ready" ...
	I1207 20:41:51.574536   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:41:51.574544   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.574556   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.574564   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.578252   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:41:51.578269   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.578278   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.578286   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.578294   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.578303   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.578315   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.578325   33734 round_trippers.go:580]     Audit-Id: 56f025a1-b4c2-4b4e-9e04-09e49614f953
	I1207 20:41:51.578547   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m03","uid":"048946f1-9cd2-47c1-9995-ee3b79818aaf","resourceVersion":"1212","creationTimestamp":"2023-12-07T20:41:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_41_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:41:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1207 20:41:51.578787   33734 node_ready.go:49] node "multinode-660958-m03" has status "Ready":"True"
	I1207 20:41:51.578802   33734 node_ready.go:38] duration metric: took 4.311129ms waiting for node "multinode-660958-m03" to be "Ready" ...
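The readiness check above polls the Node object directly through the API server until its Ready condition is True. An equivalent wait with kubectl, shown only to illustrate what the loop is doing:

  kubectl wait --for=condition=Ready node/multinode-660958-m03 --timeout=6m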
	I1207 20:41:51.578812   33734 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:41:51.578867   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1207 20:41:51.578877   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.578890   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.578904   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.582554   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:41:51.582574   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.582583   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.582591   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.582598   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.582613   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.582625   33734 round_trippers.go:580]     Audit-Id: 7130a955-c5ac-41e3-9fdf-764f1fc5f2e5
	I1207 20:41:51.582633   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.583592   33734 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1219"},"items":[{"metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"879","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82031 chars]
	I1207 20:41:51.585901   33734 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:51.585981   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-7mss7
	I1207 20:41:51.585993   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.586000   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.586006   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.587816   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:41:51.587835   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.587849   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.587858   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.587866   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.587874   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.587882   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.587893   33734 round_trippers.go:580]     Audit-Id: 8c5302ed-433b-4fd2-9b92-769592075d22
	I1207 20:41:51.588060   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-7mss7","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6d6632ea-9aae-43e7-8b17-56399870082b","resourceVersion":"879","creationTimestamp":"2023-12-07T20:27:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"837b716c-efd3-40d5-a3f6-5cf05158f16d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"837b716c-efd3-40d5-a3f6-5cf05158f16d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1207 20:41:51.588504   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:41:51.588521   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.588530   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.588536   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.590457   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:41:51.590475   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.590484   33734 round_trippers.go:580]     Audit-Id: c52a676c-d38a-498c-8582-79952c1065b4
	I1207 20:41:51.590492   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.590500   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.590509   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.590516   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.590523   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.590713   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:41:51.591035   33734 pod_ready.go:92] pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace has status "Ready":"True"
	I1207 20:41:51.591052   33734 pod_ready.go:81] duration metric: took 5.135263ms waiting for pod "coredns-5dd5756b68-7mss7" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:51.591062   33734 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:51.591106   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-660958
	I1207 20:41:51.591117   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.591127   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.591136   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.592809   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:41:51.592825   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.592834   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.592842   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.592855   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.592863   33734 round_trippers.go:580]     Audit-Id: 94844adc-6dfd-4f7a-9094-517448c97f0a
	I1207 20:41:51.592871   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.592879   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.593253   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-660958","namespace":"kube-system","uid":"997363d1-ef51-46b9-98ad-276aa803f3a8","resourceVersion":"852","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.19:2379","kubernetes.io/config.hash":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.mirror":"8b7abfcd2f221a7da3eb913c0d8d4a01","kubernetes.io/config.seen":"2023-12-07T20:27:35.772724909Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1207 20:41:51.593645   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:41:51.593660   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.593671   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.593680   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.595694   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:41:51.595706   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.595711   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.595717   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.595722   33734 round_trippers.go:580]     Audit-Id: 1a50969d-1ea9-4fe4-9f73-00cad0d0cebf
	I1207 20:41:51.595727   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.595732   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.595736   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.596012   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:41:51.596267   33734 pod_ready.go:92] pod "etcd-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:41:51.596279   33734 pod_ready.go:81] duration metric: took 5.211933ms waiting for pod "etcd-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:51.596292   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:51.596333   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-660958
	I1207 20:41:51.596344   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.596354   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.596362   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.598016   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:41:51.598034   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.598043   33734 round_trippers.go:580]     Audit-Id: 0650d945-e682-4501-a9df-a18685b796a3
	I1207 20:41:51.598051   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.598058   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.598067   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.598075   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.598083   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.598346   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-660958","namespace":"kube-system","uid":"ab5b9260-db2a-4625-aff0-8b0fcf6a74a8","resourceVersion":"856","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.19:8443","kubernetes.io/config.hash":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.mirror":"3be2f0b39689e91f9171b575c679c7c3","kubernetes.io/config.seen":"2023-12-07T20:27:35.772728261Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1207 20:41:51.598768   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:41:51.598783   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.598793   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.598803   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.600472   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:41:51.600487   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.600494   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.600501   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.600506   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.600511   33734 round_trippers.go:580]     Audit-Id: af3fb665-33f1-4d1f-827f-9d4d29ec9c77
	I1207 20:41:51.600516   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.600521   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.600750   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:41:51.601022   33734 pod_ready.go:92] pod "kube-apiserver-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:41:51.601033   33734 pod_ready.go:81] duration metric: took 4.737136ms waiting for pod "kube-apiserver-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:51.601040   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:51.601079   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-660958
	I1207 20:41:51.601089   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.601100   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.601109   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.605302   33734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1207 20:41:51.605318   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.605328   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.605335   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.605343   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.605351   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.605360   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.605368   33734 round_trippers.go:580]     Audit-Id: 19c16547-51b7-468c-8a81-2ddf53e8e180
	I1207 20:41:51.605527   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-660958","namespace":"kube-system","uid":"fb58a1b4-61c1-41c6-b3af-824cc7a08c14","resourceVersion":"871","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.mirror":"252eef32247c5aa4e495d2fdf0fe1947","kubernetes.io/config.seen":"2023-12-07T20:27:35.772729377Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1207 20:41:51.605852   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:41:51.605864   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.605871   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.605877   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.607703   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:41:51.607721   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.607729   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.607737   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.607745   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.607754   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.607761   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.607768   33734 round_trippers.go:580]     Audit-Id: 76d968c1-91c0-4b11-b90b-5954103b65ee
	I1207 20:41:51.607927   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:41:51.608215   33734 pod_ready.go:92] pod "kube-controller-manager-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:41:51.608228   33734 pod_ready.go:81] duration metric: took 7.183052ms waiting for pod "kube-controller-manager-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:51.608236   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mjptg" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:51.775635   33734 request.go:629] Waited for 167.353016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjptg
	I1207 20:41:51.775686   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjptg
	I1207 20:41:51.775691   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.775698   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.775704   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.779196   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:41:51.779222   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.779233   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.779242   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.779252   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.779263   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.779272   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.779287   33734 round_trippers.go:580]     Audit-Id: d484d01b-743a-43b4-9558-2a6020abd544
	I1207 20:41:51.779868   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjptg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f4f9d19-e657-4472-a434-2e0810ba6cf3","resourceVersion":"1216","creationTimestamp":"2023-12-07T20:29:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:29:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I1207 20:41:51.974643   33734 request.go:629] Waited for 194.32782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:41:51.974710   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:41:51.974717   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:51.974725   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:51.974732   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:51.977398   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:41:51.977422   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:51.977432   33734 round_trippers.go:580]     Audit-Id: 30df0d8e-a0ae-4ae5-8bb9-bdd1643d6af0
	I1207 20:41:51.977440   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:51.977448   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:51.977457   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:51.977465   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:51.977477   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:51 GMT
	I1207 20:41:51.977681   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m03","uid":"048946f1-9cd2-47c1-9995-ee3b79818aaf","resourceVersion":"1212","creationTimestamp":"2023-12-07T20:41:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_41_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:41:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1207 20:41:52.175466   33734 request.go:629] Waited for 197.361093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjptg
	I1207 20:41:52.175546   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjptg
	I1207 20:41:52.175558   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:52.175571   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:52.175585   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:52.178713   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:41:52.178740   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:52.178749   33734 round_trippers.go:580]     Audit-Id: e05165d2-b9f1-4354-bf04-586de8ba9eb4
	I1207 20:41:52.178756   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:52.178764   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:52.178771   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:52.178781   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:52.178793   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:52 GMT
	I1207 20:41:52.179023   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjptg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f4f9d19-e657-4472-a434-2e0810ba6cf3","resourceVersion":"1216","creationTimestamp":"2023-12-07T20:29:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:29:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I1207 20:41:52.374790   33734 request.go:629] Waited for 195.30631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:41:52.374844   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:41:52.374848   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:52.374856   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:52.374864   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:52.377800   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:41:52.377824   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:52.377834   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:52.377841   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:52.377848   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:52.377876   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:52 GMT
	I1207 20:41:52.377889   33734 round_trippers.go:580]     Audit-Id: 908796f4-c689-4a8e-8839-53c122184dd9
	I1207 20:41:52.377901   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:52.378032   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m03","uid":"048946f1-9cd2-47c1-9995-ee3b79818aaf","resourceVersion":"1212","creationTimestamp":"2023-12-07T20:41:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_41_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:41:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1207 20:41:52.879256   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjptg
	I1207 20:41:52.879287   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:52.879299   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:52.879309   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:52.881850   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:41:52.881873   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:52.881883   33734 round_trippers.go:580]     Audit-Id: a122acfa-3524-45e8-a292-f9b0fe4a0ab6
	I1207 20:41:52.881891   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:52.881899   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:52.881906   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:52.881913   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:52.881936   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:52 GMT
	I1207 20:41:52.882112   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjptg","generateName":"kube-proxy-","namespace":"kube-system","uid":"1f4f9d19-e657-4472-a434-2e0810ba6cf3","resourceVersion":"1233","creationTimestamp":"2023-12-07T20:29:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:29:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1207 20:41:52.882546   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m03
	I1207 20:41:52.882560   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:52.882568   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:52.882573   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:52.884578   33734 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1207 20:41:52.884598   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:52.884607   33734 round_trippers.go:580]     Audit-Id: 9ccfe268-1081-4e1f-a93a-ff4400ee5d0d
	I1207 20:41:52.884615   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:52.884624   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:52.884631   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:52.884643   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:52.884655   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:52 GMT
	I1207 20:41:52.884782   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m03","uid":"048946f1-9cd2-47c1-9995-ee3b79818aaf","resourceVersion":"1212","creationTimestamp":"2023-12-07T20:41:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_41_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:41:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1207 20:41:52.885089   33734 pod_ready.go:92] pod "kube-proxy-mjptg" in "kube-system" namespace has status "Ready":"True"
	I1207 20:41:52.885111   33734 pod_ready.go:81] duration metric: took 1.276868497s waiting for pod "kube-proxy-mjptg" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:52.885125   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:52.975468   33734 request.go:629] Waited for 90.259748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:41:52.975520   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfc45
	I1207 20:41:52.975527   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:52.975539   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:52.975607   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:52.978591   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:41:52.978615   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:52.978625   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:52.978634   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:52.978641   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:52.978648   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:52.978659   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:52 GMT
	I1207 20:41:52.978679   33734 round_trippers.go:580]     Audit-Id: 34172946-bd8b-49b1-b9fa-75db85add850
	I1207 20:41:52.978871   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pfc45","generateName":"kube-proxy-","namespace":"kube-system","uid":"1e39fc15-3b2e-418c-92f1-32570e3bd853","resourceVersion":"789","creationTimestamp":"2023-12-07T20:27:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1207 20:41:53.175612   33734 request.go:629] Waited for 196.370026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:41:53.175678   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:41:53.175683   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:53.175690   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:53.175696   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:53.178740   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:41:53.178758   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:53.178765   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:53.178778   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:53.178785   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:53.178795   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:53 GMT
	I1207 20:41:53.178805   33734 round_trippers.go:580]     Audit-Id: b7231ae2-516d-4246-99dc-8428279bf879
	I1207 20:41:53.178812   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:53.178982   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:41:53.179349   33734 pod_ready.go:92] pod "kube-proxy-pfc45" in "kube-system" namespace has status "Ready":"True"
	I1207 20:41:53.179367   33734 pod_ready.go:81] duration metric: took 294.234151ms waiting for pod "kube-proxy-pfc45" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:53.179375   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rxqfp" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:53.374704   33734 request.go:629] Waited for 195.279632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxqfp
	I1207 20:41:53.374759   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxqfp
	I1207 20:41:53.374764   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:53.374772   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:53.374778   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:53.378058   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:41:53.378082   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:53.378091   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:53.378100   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:53 GMT
	I1207 20:41:53.378107   33734 round_trippers.go:580]     Audit-Id: fef7dfe7-4286-4fe5-be71-cf84f104f2c6
	I1207 20:41:53.378117   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:53.378125   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:53.378133   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:53.378501   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rxqfp","generateName":"kube-proxy-","namespace":"kube-system","uid":"c06f17e2-4050-4554-8c4a-057bca0bb5ff","resourceVersion":"1051","creationTimestamp":"2023-12-07T20:28:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02916f75-8bbf-402b-b98d-7538cf8a479a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:28:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02916f75-8bbf-402b-b98d-7538cf8a479a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1207 20:41:53.575258   33734 request.go:629] Waited for 196.379188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:41:53.575311   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958-m02
	I1207 20:41:53.575316   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:53.575324   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:53.575330   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:53.578384   33734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1207 20:41:53.578401   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:53.578408   33734 round_trippers.go:580]     Audit-Id: 868b720b-400f-4a41-b75a-fd5f38d6a06a
	I1207 20:41:53.578414   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:53.578429   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:53.578436   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:53.578446   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:53.578455   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:53 GMT
	I1207 20:41:53.578901   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958-m02","uid":"1301da57-5d53-4659-aa78-22c7b081e11a","resourceVersion":"1211","creationTimestamp":"2023-12-07T20:40:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_07T20_41_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:40:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I1207 20:41:53.579157   33734 pod_ready.go:92] pod "kube-proxy-rxqfp" in "kube-system" namespace has status "Ready":"True"
	I1207 20:41:53.579171   33734 pod_ready.go:81] duration metric: took 399.791351ms waiting for pod "kube-proxy-rxqfp" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:53.579179   33734 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:53.775486   33734 request.go:629] Waited for 196.241179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:41:53.775543   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-660958
	I1207 20:41:53.775548   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:53.775555   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:53.775561   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:53.778394   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:41:53.778410   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:53.778424   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:53.778433   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:53.778444   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:53.778453   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:53 GMT
	I1207 20:41:53.778462   33734 round_trippers.go:580]     Audit-Id: c48e91be-f9ab-4712-a73f-4c490d5f92a5
	I1207 20:41:53.778473   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:53.778794   33734 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-660958","namespace":"kube-system","uid":"ff5eb685-6086-4a98-b3b9-a485746dcbd4","resourceVersion":"849","creationTimestamp":"2023-12-07T20:27:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.mirror":"36460e92ca68c41cc5386b5bee9ca633","kubernetes.io/config.seen":"2023-12-07T20:27:35.772730586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-07T20:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1207 20:41:53.975516   33734 request.go:629] Waited for 196.404953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:41:53.975578   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/multinode-660958
	I1207 20:41:53.975583   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:53.975593   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:53.975599   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:53.978449   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:41:53.978469   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:53.978475   33734 round_trippers.go:580]     Audit-Id: 3ad2538a-d418-4785-9666-10c7a8afc2c3
	I1207 20:41:53.978481   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:53.978491   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:53.978499   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:53.978509   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:53.978517   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:53 GMT
	I1207 20:41:53.978888   33734 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-07T20:27:32Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1207 20:41:53.979238   33734 pod_ready.go:92] pod "kube-scheduler-multinode-660958" in "kube-system" namespace has status "Ready":"True"
	I1207 20:41:53.979254   33734 pod_ready.go:81] duration metric: took 400.069139ms waiting for pod "kube-scheduler-multinode-660958" in "kube-system" namespace to be "Ready" ...
	I1207 20:41:53.979264   33734 pod_ready.go:38] duration metric: took 2.400438961s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:41:53.979280   33734 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 20:41:53.979329   33734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:41:53.992709   33734 system_svc.go:56] duration metric: took 13.425028ms WaitForService to wait for kubelet.
	I1207 20:41:53.992732   33734 kubeadm.go:581] duration metric: took 2.436398939s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 20:41:53.992753   33734 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:41:54.175072   33734 request.go:629] Waited for 182.255445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1207 20:41:54.175127   33734 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1207 20:41:54.175133   33734 round_trippers.go:469] Request Headers:
	I1207 20:41:54.175141   33734 round_trippers.go:473]     Accept: application/json, */*
	I1207 20:41:54.175147   33734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1207 20:41:54.178030   33734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1207 20:41:54.178055   33734 round_trippers.go:577] Response Headers:
	I1207 20:41:54.178066   33734 round_trippers.go:580]     Audit-Id: 8c07afc4-5564-412f-973a-e963d67cd3be
	I1207 20:41:54.178075   33734 round_trippers.go:580]     Cache-Control: no-cache, private
	I1207 20:41:54.178084   33734 round_trippers.go:580]     Content-Type: application/json
	I1207 20:41:54.178092   33734 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5a7a214b-bfc0-4c3a-aeba-a2b8a0e6fe12
	I1207 20:41:54.178098   33734 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fbecbaab-677c-4833-9e36-6870f48b864f
	I1207 20:41:54.178114   33734 round_trippers.go:580]     Date: Thu, 07 Dec 2023 20:41:54 GMT
	I1207 20:41:54.178685   33734 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1236"},"items":[{"metadata":{"name":"multinode-660958","uid":"f16e69f6-54d2-43bb-8775-374952a81795","resourceVersion":"900","creationTimestamp":"2023-12-07T20:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-660958","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9ef2cce417fa3e029706bd52eaf40ea89608b2c","minikube.k8s.io/name":"multinode-660958","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_07T20_27_36_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16465 chars]
	I1207 20:41:54.179477   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:41:54.179499   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:41:54.179510   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:41:54.179515   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:41:54.179518   33734 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:41:54.179523   33734 node_conditions.go:123] node cpu capacity is 2
	I1207 20:41:54.179531   33734 node_conditions.go:105] duration metric: took 186.77231ms to run NodePressure ...
	I1207 20:41:54.179546   33734 start.go:228] waiting for startup goroutines ...
	I1207 20:41:54.179575   33734 start.go:242] writing updated cluster config ...
	I1207 20:41:54.179898   33734 ssh_runner.go:195] Run: rm -f paused
	I1207 20:41:54.226157   33734 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 20:41:54.229037   33734 out.go:177] * Done! kubectl is now configured to use "multinode-660958" cluster and "default" namespace by default
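Note on the trace above: the pod_ready/node_ready entries are minikube polling the API server until each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) reports Ready and each node is free of pressure conditions; the "Waited for ... due to client-side throttling" entries come from client-go's client-side rate limiter (QPS/burst) spacing out those GET requests. A roughly equivalent manual spot-check against this cluster could look like the following; this is an illustrative sketch that reuses the context name and label selectors shown in the log, not minikube's own code path:

  kubectl --context multinode-660958 --namespace=kube-system wait --for=condition=ready pod --selector=k8s-app=kube-proxy --timeout=6m0s
  kubectl --context multinode-660958 get nodes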
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 20:37:39 UTC, ends at Thu 2023-12-07 20:41:55 UTC. --
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.352026118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c38d1fbc-ed23-4eb6-b0c3-52719c50fd4b name=/runtime.v1.RuntimeService/Version
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.354469647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c7484601-de55-4686-a3a3-3057169b8fab name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.355011489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701981715354995597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c7484601-de55-4686-a3a3-3057169b8fab name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.355826081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e66d4533-d038-4aa3-8982-35b79fa9a3ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.355871268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e66d4533-d038-4aa3-8982-35b79fa9a3ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.356163048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2db32c9fb9333fbc8a531d270ca6af64b1b0fd87b72475e07f3c2e990b2cfb,PodSandboxId:60e77c7459f90f7db77824217c44129ee987e6b4a86e0ac6ec4ca8268b8f5003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701981523612889254,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6158bce6682fcede5fea0e3d4223b6f51b66916e90843ab9b06ec653f63932e2,PodSandboxId:a2cb571b3359dcd001b4575f44c48b14679e1a0c8e05b76490a3de812f4325b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701981510872345718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-jbm9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38ee0c6-472e-4db5-bb15-c1f1ce390207,},Annotations:map[string]string{io.kubernetes.container.hash: 461edd0,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4027f020f41ba3d7d801c905da4e5ec37417950b4aa14cf794e4265ddd1ca884,PodSandboxId:df1f9b098b96efd95634ec1ac89c1464eb019ebf0d7d193a94c9bfb3ec64171c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701981507786190589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7mss7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6632ea-9aae-43e7-8b17-56399870082b,},Annotations:map[string]string{io.kubernetes.container.hash: e555cfc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07079667924c1ad44e38f3b70bca49240c1fd6d0ae6338e946fe995713040ace,PodSandboxId:de025ce1255abe9f20b7c668ebaf5142df1089d26798bbc7ff0e334305177b9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701981494684052899,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpfqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 158552a2-294c-4d08-81de-05b1daf7dfe1,},Annotations:map[string]string{io.kubernetes.container.hash: 2587529f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006dd2ca7aa03e900443ce7d00fce61a0660701c11139ee1b711c44857c02dc3,PodSandboxId:7ec975c2a24bd8bc50b7c94f2bba0bf5d237aec95350c284c49a8dddf6e46e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701981492537538528,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfc45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e39fc15-3b2e-418c-92f1-32570e3
bd853,},Annotations:map[string]string{io.kubernetes.container.hash: c931b25d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f4a64be23c4d97f11c819cef00876275801554fbc32e8997b22bd6cdc6c1f7,PodSandboxId:60e77c7459f90f7db77824217c44129ee987e6b4a86e0ac6ec4ca8268b8f5003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701981492514792200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5
d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36c279f8e416a3a4aa51f05ee2c235fd302b162ae64e33fa8f9eafe81efc6bc,PodSandboxId:601be80f00d7074c353bdbe54727a828416e4b1c6398634a37ac56c9f7cdf0a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701981486982125193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36460e92ca68c41cc5386b5bee9ca633,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c927eda8b55affd773b6add0d06db38b55d32205eb7b82130ccefeb809c8c7f,PodSandboxId:0698bded6de3ac73488735cc88efca9376e090a8c9fb6042c311dbe0450b1562,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701981486883412403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7abfcd2f221a7da3eb913c0d8d4a01,},Annotations:map[string]string{io.kubernetes.container.hash
: 9af16bb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d36ac825c0eac9306f365adf49ac4e5d441cce6c96c6dc2758d8f43fc27d00,PodSandboxId:40580bbd7150c50bdc913eae570b1a6c312a2350d1764e6c89d37c6747b7aa53,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701981486495505031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be2f0b39689e91f9171b575c679c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: 251fd5a2,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc02b2a29f14c532141bf59f0403267af23dba558c83406d89ea96944dafb71,PodSandboxId:c4741cf8634891655c73e74b26531be9b7a180d0dafb2fa9f7d1d51783b09e09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701981486467737901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 252eef32247c5aa4e495d2fdf0fe1947,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e66d4533-d038-4aa3-8982-35b79fa9a3ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.394363589Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=04c1dcef-6da9-440f-ad2a-49f79c5125b0 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.394447693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=04c1dcef-6da9-440f-ad2a-49f79c5125b0 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.395468187Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6870ae2c-d491-4dda-8a08-676a997d10c8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.396019547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701981715396004674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6870ae2c-d491-4dda-8a08-676a997d10c8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.397089390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aa296651-7549-495e-8db6-38b6d5b50a63 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.397159189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aa296651-7549-495e-8db6-38b6d5b50a63 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.397354858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2db32c9fb9333fbc8a531d270ca6af64b1b0fd87b72475e07f3c2e990b2cfb,PodSandboxId:60e77c7459f90f7db77824217c44129ee987e6b4a86e0ac6ec4ca8268b8f5003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701981523612889254,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6158bce6682fcede5fea0e3d4223b6f51b66916e90843ab9b06ec653f63932e2,PodSandboxId:a2cb571b3359dcd001b4575f44c48b14679e1a0c8e05b76490a3de812f4325b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701981510872345718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-jbm9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38ee0c6-472e-4db5-bb15-c1f1ce390207,},Annotations:map[string]string{io.kubernetes.container.hash: 461edd0,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4027f020f41ba3d7d801c905da4e5ec37417950b4aa14cf794e4265ddd1ca884,PodSandboxId:df1f9b098b96efd95634ec1ac89c1464eb019ebf0d7d193a94c9bfb3ec64171c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701981507786190589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7mss7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6632ea-9aae-43e7-8b17-56399870082b,},Annotations:map[string]string{io.kubernetes.container.hash: e555cfc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07079667924c1ad44e38f3b70bca49240c1fd6d0ae6338e946fe995713040ace,PodSandboxId:de025ce1255abe9f20b7c668ebaf5142df1089d26798bbc7ff0e334305177b9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701981494684052899,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpfqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 158552a2-294c-4d08-81de-05b1daf7dfe1,},Annotations:map[string]string{io.kubernetes.container.hash: 2587529f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006dd2ca7aa03e900443ce7d00fce61a0660701c11139ee1b711c44857c02dc3,PodSandboxId:7ec975c2a24bd8bc50b7c94f2bba0bf5d237aec95350c284c49a8dddf6e46e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701981492537538528,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfc45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e39fc15-3b2e-418c-92f1-32570e3
bd853,},Annotations:map[string]string{io.kubernetes.container.hash: c931b25d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f4a64be23c4d97f11c819cef00876275801554fbc32e8997b22bd6cdc6c1f7,PodSandboxId:60e77c7459f90f7db77824217c44129ee987e6b4a86e0ac6ec4ca8268b8f5003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701981492514792200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5
d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36c279f8e416a3a4aa51f05ee2c235fd302b162ae64e33fa8f9eafe81efc6bc,PodSandboxId:601be80f00d7074c353bdbe54727a828416e4b1c6398634a37ac56c9f7cdf0a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701981486982125193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36460e92ca68c41cc5386b5bee9ca633,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c927eda8b55affd773b6add0d06db38b55d32205eb7b82130ccefeb809c8c7f,PodSandboxId:0698bded6de3ac73488735cc88efca9376e090a8c9fb6042c311dbe0450b1562,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701981486883412403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7abfcd2f221a7da3eb913c0d8d4a01,},Annotations:map[string]string{io.kubernetes.container.hash
: 9af16bb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d36ac825c0eac9306f365adf49ac4e5d441cce6c96c6dc2758d8f43fc27d00,PodSandboxId:40580bbd7150c50bdc913eae570b1a6c312a2350d1764e6c89d37c6747b7aa53,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701981486495505031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be2f0b39689e91f9171b575c679c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: 251fd5a2,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc02b2a29f14c532141bf59f0403267af23dba558c83406d89ea96944dafb71,PodSandboxId:c4741cf8634891655c73e74b26531be9b7a180d0dafb2fa9f7d1d51783b09e09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701981486467737901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 252eef32247c5aa4e495d2fdf0fe1947,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aa296651-7549-495e-8db6-38b6d5b50a63 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.425765121Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f73d725e-af3b-4b44-bf29-28f5f00a0a6b name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.426083276Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a2cb571b3359dcd001b4575f44c48b14679e1a0c8e05b76490a3de812f4325b1,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-jbm9q,Uid:c38ee0c6-472e-4db5-bb15-c1f1ce390207,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701981507353847440,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-jbm9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38ee0c6-472e-4db5-bb15-c1f1ce390207,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T20:38:11.374230277Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:df1f9b098b96efd95634ec1ac89c1464eb019ebf0d7d193a94c9bfb3ec64171c,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-7mss7,Uid:6d6632ea-9aae-43e7-8b17-56399870082b,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1701981507152042798,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-7mss7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6632ea-9aae-43e7-8b17-56399870082b,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T20:38:11.374220185Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de025ce1255abe9f20b7c668ebaf5142df1089d26798bbc7ff0e334305177b9a,Metadata:&PodSandboxMetadata{Name:kindnet-jpfqs,Uid:158552a2-294c-4d08-81de-05b1daf7dfe1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701981491757493241,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jpfqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 158552a2-294c-4d08-81de-05b1daf7dfe1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2023-12-07T20:38:11.374231359Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ec975c2a24bd8bc50b7c94f2bba0bf5d237aec95350c284c49a8dddf6e46e73,Metadata:&PodSandboxMetadata{Name:kube-proxy-pfc45,Uid:1e39fc15-3b2e-418c-92f1-32570e3bd853,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701981491743018310,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pfc45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e39fc15-3b2e-418c-92f1-32570e3bd853,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T20:38:11.374233290Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:60e77c7459f90f7db77824217c44129ee987e6b4a86e0ac6ec4ca8268b8f5003,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:48bcf9dc-632d-4f04-9f6a-04d31cef5d88,Namespace:kube-system,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1701981491704712624,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5d88,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tm
p\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-07T20:38:11.374228628Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:601be80f00d7074c353bdbe54727a828416e4b1c6398634a37ac56c9f7cdf0a5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-660958,Uid:36460e92ca68c41cc5386b5bee9ca633,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701981485920643179,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36460e92ca68c41cc5386b5bee9ca633,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 36460e92ca68c41cc5386b5bee9ca633,kubernetes.io/config.seen: 2023-12-07T20:38:05.366106870Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40580bbd7150c50bdc913eae570b1a6c312a2350d1764e6c89d37c6747b7aa53,Metadata:&PodSandboxMetadata{Name:kube-apiserver-mult
inode-660958,Uid:3be2f0b39689e91f9171b575c679c7c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701981485907165219,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be2f0b39689e91f9171b575c679c7c3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.19:8443,kubernetes.io/config.hash: 3be2f0b39689e91f9171b575c679c7c3,kubernetes.io/config.seen: 2023-12-07T20:38:05.366104959Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0698bded6de3ac73488735cc88efca9376e090a8c9fb6042c311dbe0450b1562,Metadata:&PodSandboxMetadata{Name:etcd-multinode-660958,Uid:8b7abfcd2f221a7da3eb913c0d8d4a01,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701981485903182115,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: etcd-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7abfcd2f221a7da3eb913c0d8d4a01,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.19:2379,kubernetes.io/config.hash: 8b7abfcd2f221a7da3eb913c0d8d4a01,kubernetes.io/config.seen: 2023-12-07T20:38:05.366101214Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c4741cf8634891655c73e74b26531be9b7a180d0dafb2fa9f7d1d51783b09e09,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-660958,Uid:252eef32247c5aa4e495d2fdf0fe1947,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701981485894014610,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 252eef32247c5aa4e495d2fdf0fe1947,tier: control-plane,},Annotations:map[string]string{kubernet
es.io/config.hash: 252eef32247c5aa4e495d2fdf0fe1947,kubernetes.io/config.seen: 2023-12-07T20:38:05.366106011Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=f73d725e-af3b-4b44-bf29-28f5f00a0a6b name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.426814569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8042ce11-614d-4239-a6ad-751f8a6d8364 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.426893018Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8042ce11-614d-4239-a6ad-751f8a6d8364 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.427165764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2db32c9fb9333fbc8a531d270ca6af64b1b0fd87b72475e07f3c2e990b2cfb,PodSandboxId:60e77c7459f90f7db77824217c44129ee987e6b4a86e0ac6ec4ca8268b8f5003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701981523612889254,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6158bce6682fcede5fea0e3d4223b6f51b66916e90843ab9b06ec653f63932e2,PodSandboxId:a2cb571b3359dcd001b4575f44c48b14679e1a0c8e05b76490a3de812f4325b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701981510872345718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-jbm9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38ee0c6-472e-4db5-bb15-c1f1ce390207,},Annotations:map[string]string{io.kubernetes.container.hash: 461edd0,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4027f020f41ba3d7d801c905da4e5ec37417950b4aa14cf794e4265ddd1ca884,PodSandboxId:df1f9b098b96efd95634ec1ac89c1464eb019ebf0d7d193a94c9bfb3ec64171c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701981507786190589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7mss7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6632ea-9aae-43e7-8b17-56399870082b,},Annotations:map[string]string{io.kubernetes.container.hash: e555cfc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07079667924c1ad44e38f3b70bca49240c1fd6d0ae6338e946fe995713040ace,PodSandboxId:de025ce1255abe9f20b7c668ebaf5142df1089d26798bbc7ff0e334305177b9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701981494684052899,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpfqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 158552a2-294c-4d08-81de-05b1daf7dfe1,},Annotations:map[string]string{io.kubernetes.container.hash: 2587529f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006dd2ca7aa03e900443ce7d00fce61a0660701c11139ee1b711c44857c02dc3,PodSandboxId:7ec975c2a24bd8bc50b7c94f2bba0bf5d237aec95350c284c49a8dddf6e46e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701981492537538528,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfc45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e39fc15-3b2e-418c-92f1-32570e3
bd853,},Annotations:map[string]string{io.kubernetes.container.hash: c931b25d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36c279f8e416a3a4aa51f05ee2c235fd302b162ae64e33fa8f9eafe81efc6bc,PodSandboxId:601be80f00d7074c353bdbe54727a828416e4b1c6398634a37ac56c9f7cdf0a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701981486982125193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36460e92ca68c41cc5386b5bee9ca633,},Anno
tations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c927eda8b55affd773b6add0d06db38b55d32205eb7b82130ccefeb809c8c7f,PodSandboxId:0698bded6de3ac73488735cc88efca9376e090a8c9fb6042c311dbe0450b1562,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701981486883412403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7abfcd2f221a7da3eb913c0d8d4a01,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 9af16bb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d36ac825c0eac9306f365adf49ac4e5d441cce6c96c6dc2758d8f43fc27d00,PodSandboxId:40580bbd7150c50bdc913eae570b1a6c312a2350d1764e6c89d37c6747b7aa53,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701981486495505031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be2f0b39689e91f9171b575c679c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: 251fd5a2
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc02b2a29f14c532141bf59f0403267af23dba558c83406d89ea96944dafb71,PodSandboxId:c4741cf8634891655c73e74b26531be9b7a180d0dafb2fa9f7d1d51783b09e09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701981486467737901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 252eef32247c5aa4e495d2fdf0fe1947,},Annotations:map[string]string{io.kubernetes.
container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8042ce11-614d-4239-a6ad-751f8a6d8364 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.438742787Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7e324aa4-cc9b-46d9-a872-4406547821b5 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.438819767Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7e324aa4-cc9b-46d9-a872-4406547821b5 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.441259947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cd0219e3-453b-4be7-942e-a9267a96cc83 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.441641570Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701981715441630026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=cd0219e3-453b-4be7-942e-a9267a96cc83 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.442435870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bcbb7c5c-4f4a-4e8a-ad83-11e1bdf39584 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.442505740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bcbb7c5c-4f4a-4e8a-ad83-11e1bdf39584 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:41:55 multinode-660958 crio[710]: time="2023-12-07 20:41:55.442713951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2db32c9fb9333fbc8a531d270ca6af64b1b0fd87b72475e07f3c2e990b2cfb,PodSandboxId:60e77c7459f90f7db77824217c44129ee987e6b4a86e0ac6ec4ca8268b8f5003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701981523612889254,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6158bce6682fcede5fea0e3d4223b6f51b66916e90843ab9b06ec653f63932e2,PodSandboxId:a2cb571b3359dcd001b4575f44c48b14679e1a0c8e05b76490a3de812f4325b1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701981510872345718,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-jbm9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c38ee0c6-472e-4db5-bb15-c1f1ce390207,},Annotations:map[string]string{io.kubernetes.container.hash: 461edd0,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4027f020f41ba3d7d801c905da4e5ec37417950b4aa14cf794e4265ddd1ca884,PodSandboxId:df1f9b098b96efd95634ec1ac89c1464eb019ebf0d7d193a94c9bfb3ec64171c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701981507786190589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7mss7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6632ea-9aae-43e7-8b17-56399870082b,},Annotations:map[string]string{io.kubernetes.container.hash: e555cfc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07079667924c1ad44e38f3b70bca49240c1fd6d0ae6338e946fe995713040ace,PodSandboxId:de025ce1255abe9f20b7c668ebaf5142df1089d26798bbc7ff0e334305177b9a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701981494684052899,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpfqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 158552a2-294c-4d08-81de-05b1daf7dfe1,},Annotations:map[string]string{io.kubernetes.container.hash: 2587529f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006dd2ca7aa03e900443ce7d00fce61a0660701c11139ee1b711c44857c02dc3,PodSandboxId:7ec975c2a24bd8bc50b7c94f2bba0bf5d237aec95350c284c49a8dddf6e46e73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701981492537538528,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pfc45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e39fc15-3b2e-418c-92f1-32570e3
bd853,},Annotations:map[string]string{io.kubernetes.container.hash: c931b25d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f4a64be23c4d97f11c819cef00876275801554fbc32e8997b22bd6cdc6c1f7,PodSandboxId:60e77c7459f90f7db77824217c44129ee987e6b4a86e0ac6ec4ca8268b8f5003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701981492514792200,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48bcf9dc-632d-4f04-9f6a-04d31cef5
d88,},Annotations:map[string]string{io.kubernetes.container.hash: d29d4471,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36c279f8e416a3a4aa51f05ee2c235fd302b162ae64e33fa8f9eafe81efc6bc,PodSandboxId:601be80f00d7074c353bdbe54727a828416e4b1c6398634a37ac56c9f7cdf0a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701981486982125193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36460e92ca68c41cc5386b5bee9ca633,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c927eda8b55affd773b6add0d06db38b55d32205eb7b82130ccefeb809c8c7f,PodSandboxId:0698bded6de3ac73488735cc88efca9376e090a8c9fb6042c311dbe0450b1562,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701981486883412403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7abfcd2f221a7da3eb913c0d8d4a01,},Annotations:map[string]string{io.kubernetes.container.hash
: 9af16bb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d36ac825c0eac9306f365adf49ac4e5d441cce6c96c6dc2758d8f43fc27d00,PodSandboxId:40580bbd7150c50bdc913eae570b1a6c312a2350d1764e6c89d37c6747b7aa53,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701981486495505031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be2f0b39689e91f9171b575c679c7c3,},Annotations:map[string]string{io.kubernetes.container.hash: 251fd5a2,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc02b2a29f14c532141bf59f0403267af23dba558c83406d89ea96944dafb71,PodSandboxId:c4741cf8634891655c73e74b26531be9b7a180d0dafb2fa9f7d1d51783b09e09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701981486467737901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-660958,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 252eef32247c5aa4e495d2fdf0fe1947,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bcbb7c5c-4f4a-4e8a-ad83-11e1bdf39584 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd2db32c9fb93       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   60e77c7459f90       storage-provisioner
	6158bce6682fc       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   a2cb571b3359d       busybox-5bc68d56bd-jbm9q
	4027f020f41ba       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   df1f9b098b96e       coredns-5dd5756b68-7mss7
	07079667924c1       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   de025ce1255ab       kindnet-jpfqs
	006dd2ca7aa03       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   7ec975c2a24bd       kube-proxy-pfc45
	d3f4a64be23c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   60e77c7459f90       storage-provisioner
	d36c279f8e416       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   601be80f00d70       kube-scheduler-multinode-660958
	5c927eda8b55a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   0698bded6de3a       etcd-multinode-660958
	91d36ac825c0e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   40580bbd7150c       kube-apiserver-multinode-660958
	9bc02b2a29f14       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   c4741cf863489       kube-controller-manager-multinode-660958
	
	* 
	* ==> coredns [4027f020f41ba3d7d801c905da4e5ec37417950b4aa14cf794e4265ddd1ca884] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40029 - 42873 "HINFO IN 3297199820217208397.4239773972359875374. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009878485s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-660958
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-660958
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=multinode-660958
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T20_27_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:27:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-660958
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:41:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:38:42 +0000   Thu, 07 Dec 2023 20:27:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:38:42 +0000   Thu, 07 Dec 2023 20:27:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:38:42 +0000   Thu, 07 Dec 2023 20:27:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:38:42 +0000   Thu, 07 Dec 2023 20:38:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    multinode-660958
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 27972dc3ec4347f3b362f8548c92a179
	  System UUID:                27972dc3-ec43-47f3-b362-f8548c92a179
	  Boot ID:                    611c9827-3d7d-4ba9-a45d-ec1a2818dd3d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-jbm9q                    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         13m
	  kube-system                 coredns-5dd5756b68-7mss7                    100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     14m
	  kube-system                 etcd-multinode-660958                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         14m
	  kube-system                 kindnet-jpfqs                               100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      14m
	  kube-system                 kube-apiserver-multinode-660958             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         14m
	  kube-system                 kube-controller-manager-multinode-660958    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         14m
	  kube-system                 kube-proxy-pfc45                            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         14m
	  kube-system                 kube-scheduler-multinode-660958             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         14m
	  kube-system                 storage-provisioner                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   100m (5%!)(MISSING)
	  memory             220Mi (10%!)(MISSING)  220Mi (10%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-660958 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-660958 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-660958 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                    node-controller  Node multinode-660958 event: Registered Node multinode-660958 in Controller
	  Normal  NodeReady                14m                    kubelet          Node multinode-660958 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-660958 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-660958 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-660958 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-660958 event: Registered Node multinode-660958 in Controller
	
	
	Name:               multinode-660958-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-660958-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=multinode-660958
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_07T20_41_51_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:40:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-660958-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:41:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:40:10 +0000   Thu, 07 Dec 2023 20:40:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:40:10 +0000   Thu, 07 Dec 2023 20:40:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:40:10 +0000   Thu, 07 Dec 2023 20:40:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:40:10 +0000   Thu, 07 Dec 2023 20:40:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    multinode-660958-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3106d12287b94e3b946de42da9b1f4d9
	  System UUID:                3106d122-87b9-4e3b-946d-e42da9b1f4d9
	  Boot ID:                    33e5ed41-b61b-4407-8aeb-48da19301e90
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-zbc8r    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9s
	  kube-system                 kindnet-d764j               100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      13m
	  kube-system                 kube-proxy-rxqfp            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%!)(MISSING)  100m (5%!)(MISSING)
	  memory             50Mi (2%!)(MISSING)  50Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)     0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)     0 (0%!)(MISSING)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 104s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-660958-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-660958-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-660958-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-660958-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m53s                  kubelet     Node multinode-660958-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m21s (x2 over 3m21s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 106s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)    kubelet     Node multinode-660958-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)    kubelet     Node multinode-660958-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)    kubelet     Node multinode-660958-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                105s                   kubelet     Node multinode-660958-m02 status is now: NodeReady
	
	
	Name:               multinode-660958-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-660958-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=multinode-660958
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_07T20_41_51_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:41:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-660958-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:41:50 +0000   Thu, 07 Dec 2023 20:41:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:41:50 +0000   Thu, 07 Dec 2023 20:41:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:41:50 +0000   Thu, 07 Dec 2023 20:41:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:41:50 +0000   Thu, 07 Dec 2023 20:41:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    multinode-660958-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e04f3aec0eb148fea97b3b73cd463bfe
	  System UUID:                e04f3aec-0eb1-48fe-a97b-3b73cd463bfe
	  Boot ID:                    1a68262a-d37b-4d03-b40d-705c3f4765c3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pzzgm    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         109s
	  kube-system                 kindnet-6flr5               100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      12m
	  kube-system                 kube-proxy-mjptg            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%!)(MISSING)  100m (5%!)(MISSING)
	  memory             50Mi (2%!)(MISSING)  50Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)     0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)     0 (0%!)(MISSING)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 3s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-660958-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-660958-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-660958-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-660958-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-660958-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-660958-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-660958-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-660958-m03 status is now: NodeReady
	  Normal   NodeNotReady             70s                 kubelet     Node multinode-660958-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        41s (x2 over 101s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       9s                  kubelet     Node multinode-660958-m03 status is now: NodeNotSchedulable
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet     Node multinode-660958-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet     Node multinode-660958-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet     Node multinode-660958-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet     Node multinode-660958-m03 status is now: NodeReady
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	
	* 
	* ==> dmesg <==
	* [Dec 7 20:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067258] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.364349] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.527822] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139439] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.468491] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.602752] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.107735] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.153965] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.113621] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.218567] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[Dec 7 20:38] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[ +18.820428] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [5c927eda8b55affd773b6add0d06db38b55d32205eb7b82130ccefeb809c8c7f] <==
	* {"level":"info","ts":"2023-12-07T20:38:08.574988Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T20:38:08.575024Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T20:38:08.577027Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-07T20:38:08.577266Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"683e1d26ac7e3123","initial-advertise-peer-urls":["https://192.168.39.19:2380"],"listen-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-07T20:38:08.577323Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-07T20:38:08.577406Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-12-07T20:38:08.577429Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-12-07T20:38:08.577618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 switched to configuration voters=(7511473280440480035)"}
	{"level":"info","ts":"2023-12-07T20:38:08.579259Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123","added-peer-id":"683e1d26ac7e3123","added-peer-peer-urls":["https://192.168.39.19:2380"]}
	{"level":"info","ts":"2023-12-07T20:38:08.580614Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:38:08.581018Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:38:09.858278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-07T20:38:09.858323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-07T20:38:09.858362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgPreVoteResp from 683e1d26ac7e3123 at term 2"}
	{"level":"info","ts":"2023-12-07T20:38:09.858382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became candidate at term 3"}
	{"level":"info","ts":"2023-12-07T20:38:09.858388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgVoteResp from 683e1d26ac7e3123 at term 3"}
	{"level":"info","ts":"2023-12-07T20:38:09.858396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became leader at term 3"}
	{"level":"info","ts":"2023-12-07T20:38:09.858402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 683e1d26ac7e3123 elected leader 683e1d26ac7e3123 at term 3"}
	{"level":"info","ts":"2023-12-07T20:38:09.861408Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:38:09.86135Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"683e1d26ac7e3123","local-member-attributes":"{Name:multinode-660958 ClientURLs:[https://192.168.39.19:2379]}","request-path":"/0/members/683e1d26ac7e3123/attributes","cluster-id":"3f32d84448c0bab8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T20:38:09.862176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:38:09.862555Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.19:2379"}
	{"level":"info","ts":"2023-12-07T20:38:09.862997Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T20:38:09.863035Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T20:38:09.863495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  20:41:55 up 4 min,  0 users,  load average: 0.50, 0.33, 0.14
	Linux multinode-660958 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [07079667924c1ad44e38f3b70bca49240c1fd6d0ae6338e946fe995713040ace] <==
	* I1207 20:41:06.547108       1 main.go:250] Node multinode-660958-m03 has CIDR [10.244.3.0/24] 
	I1207 20:41:16.552242       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:41:16.552393       1 main.go:227] handling current node
	I1207 20:41:16.552426       1 main.go:223] Handling node with IPs: map[192.168.39.69:{}]
	I1207 20:41:16.552449       1 main.go:250] Node multinode-660958-m02 has CIDR [10.244.1.0/24] 
	I1207 20:41:16.552589       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I1207 20:41:16.552614       1 main.go:250] Node multinode-660958-m03 has CIDR [10.244.3.0/24] 
	I1207 20:41:26.558071       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:41:26.558116       1 main.go:227] handling current node
	I1207 20:41:26.558127       1 main.go:223] Handling node with IPs: map[192.168.39.69:{}]
	I1207 20:41:26.558133       1 main.go:250] Node multinode-660958-m02 has CIDR [10.244.1.0/24] 
	I1207 20:41:26.558230       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I1207 20:41:26.558235       1 main.go:250] Node multinode-660958-m03 has CIDR [10.244.3.0/24] 
	I1207 20:41:36.566831       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:41:36.566890       1 main.go:227] handling current node
	I1207 20:41:36.566902       1 main.go:223] Handling node with IPs: map[192.168.39.69:{}]
	I1207 20:41:36.566909       1 main.go:250] Node multinode-660958-m02 has CIDR [10.244.1.0/24] 
	I1207 20:41:36.567132       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I1207 20:41:36.567139       1 main.go:250] Node multinode-660958-m03 has CIDR [10.244.3.0/24] 
	I1207 20:41:46.584684       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I1207 20:41:46.584776       1 main.go:227] handling current node
	I1207 20:41:46.584806       1 main.go:223] Handling node with IPs: map[192.168.39.69:{}]
	I1207 20:41:46.584829       1 main.go:250] Node multinode-660958-m02 has CIDR [10.244.1.0/24] 
	I1207 20:41:46.585099       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I1207 20:41:46.585147       1 main.go:250] Node multinode-660958-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [91d36ac825c0eac9306f365adf49ac4e5d441cce6c96c6dc2758d8f43fc27d00] <==
	* I1207 20:38:11.159783       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1207 20:38:11.159811       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1207 20:38:11.159841       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1207 20:38:11.159861       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1207 20:38:11.309723       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 20:38:11.347020       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1207 20:38:11.349456       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1207 20:38:11.349560       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1207 20:38:11.349587       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1207 20:38:11.349472       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1207 20:38:11.351304       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 20:38:11.364073       1 shared_informer.go:318] Caches are synced for configmaps
	I1207 20:38:11.364236       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1207 20:38:11.364287       1 aggregator.go:166] initial CRD sync complete...
	I1207 20:38:11.364310       1 autoregister_controller.go:141] Starting autoregister controller
	I1207 20:38:11.364332       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 20:38:11.364355       1 cache.go:39] Caches are synced for autoregister controller
	I1207 20:38:12.160203       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 20:38:13.934804       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1207 20:38:14.089111       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1207 20:38:14.099634       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1207 20:38:14.166740       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 20:38:14.173164       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 20:38:23.819574       1 controller.go:624] quota admission added evaluator for: endpoints
	I1207 20:38:23.867533       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [9bc02b2a29f14c532141bf59f0403267af23dba558c83406d89ea96944dafb71] <==
	* I1207 20:40:09.933641       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-660958-m03"
	I1207 20:40:09.935182       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-vllfc" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-vllfc"
	I1207 20:40:09.954482       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-660958-m02" podCIDRs=["10.244.1.0/24"]
	I1207 20:40:10.073992       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-660958-m02"
	I1207 20:40:10.572544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="14.504758ms"
	I1207 20:40:10.572691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.508µs"
	I1207 20:40:10.837858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="77.72µs"
	I1207 20:40:24.116541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="70.998µs"
	I1207 20:40:24.710247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.675µs"
	I1207 20:40:24.713170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="315.289µs"
	I1207 20:40:45.764385       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-660958-m02"
	I1207 20:41:46.788813       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-zbc8r"
	I1207 20:41:46.799273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.573602ms"
	I1207 20:41:46.816027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.638745ms"
	I1207 20:41:46.816334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="157.056µs"
	I1207 20:41:46.825464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="27.956µs"
	I1207 20:41:47.976140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.62583ms"
	I1207 20:41:47.976811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="263.352µs"
	I1207 20:41:49.793797       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-660958-m02"
	I1207 20:41:50.432646       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-660958-m02"
	I1207 20:41:50.433367       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-660958-m03\" does not exist"
	I1207 20:41:50.433596       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-pzzgm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-pzzgm"
	I1207 20:41:50.454283       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-660958-m03" podCIDRs=["10.244.2.0/24"]
	I1207 20:41:50.591535       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-660958-m03"
	I1207 20:41:51.328607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="73.333µs"
	
	* 
	* ==> kube-proxy [006dd2ca7aa03e900443ce7d00fce61a0660701c11139ee1b711c44857c02dc3] <==
	* I1207 20:38:12.787200       1 server_others.go:69] "Using iptables proxy"
	I1207 20:38:12.801402       1 node.go:141] Successfully retrieved node IP: 192.168.39.19
	I1207 20:38:12.873366       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 20:38:12.873423       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 20:38:12.876973       1 server_others.go:152] "Using iptables Proxier"
	I1207 20:38:12.877133       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 20:38:12.878107       1 server.go:846] "Version info" version="v1.28.4"
	I1207 20:38:12.878333       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:38:12.881882       1 config.go:188] "Starting service config controller"
	I1207 20:38:12.882472       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 20:38:12.882652       1 config.go:97] "Starting endpoint slice config controller"
	I1207 20:38:12.882689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 20:38:12.883628       1 config.go:315] "Starting node config controller"
	I1207 20:38:12.883769       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 20:38:12.983089       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 20:38:12.983163       1 shared_informer.go:318] Caches are synced for service config
	I1207 20:38:12.985136       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d36c279f8e416a3a4aa51f05ee2c235fd302b162ae64e33fa8f9eafe81efc6bc] <==
	* I1207 20:38:08.786339       1 serving.go:348] Generated self-signed cert in-memory
	W1207 20:38:11.259140       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 20:38:11.259344       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 20:38:11.259372       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 20:38:11.259455       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 20:38:11.296104       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1207 20:38:11.296148       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:38:11.297856       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1207 20:38:11.303044       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 20:38:11.303092       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 20:38:11.303112       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1207 20:38:11.314699       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 20:38:11.314753       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 20:38:11.316212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	E1207 20:38:11.316263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	I1207 20:38:12.704237       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 20:37:39 UTC, ends at Thu 2023-12-07 20:41:56 UTC. --
	Dec 07 20:38:15 multinode-660958 kubelet[918]: E1207 20:38:15.111237     918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c38ee0c6-472e-4db5-bb15-c1f1ce390207-kube-api-access-bq5rf podName:c38ee0c6-472e-4db5-bb15-c1f1ce390207 nodeName:}" failed. No retries permitted until 2023-12-07 20:38:19.111213657 +0000 UTC m=+13.978732639 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bq5rf" (UniqueName: "kubernetes.io/projected/c38ee0c6-472e-4db5-bb15-c1f1ce390207-kube-api-access-bq5rf") pod "busybox-5bc68d56bd-jbm9q" (UID: "c38ee0c6-472e-4db5-bb15-c1f1ce390207") : object "default"/"kube-root-ca.crt" not registered
	Dec 07 20:38:15 multinode-660958 kubelet[918]: E1207 20:38:15.418629     918 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 07 20:38:15 multinode-660958 kubelet[918]: E1207 20:38:15.426732     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-7mss7" podUID="6d6632ea-9aae-43e7-8b17-56399870082b"
	Dec 07 20:38:16 multinode-660958 kubelet[918]: E1207 20:38:16.425344     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-jbm9q" podUID="c38ee0c6-472e-4db5-bb15-c1f1ce390207"
	Dec 07 20:38:17 multinode-660958 kubelet[918]: E1207 20:38:17.425266     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-7mss7" podUID="6d6632ea-9aae-43e7-8b17-56399870082b"
	Dec 07 20:38:18 multinode-660958 kubelet[918]: E1207 20:38:18.425137     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-jbm9q" podUID="c38ee0c6-472e-4db5-bb15-c1f1ce390207"
	Dec 07 20:38:19 multinode-660958 kubelet[918]: E1207 20:38:19.041223     918 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 07 20:38:19 multinode-660958 kubelet[918]: E1207 20:38:19.041388     918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d6632ea-9aae-43e7-8b17-56399870082b-config-volume podName:6d6632ea-9aae-43e7-8b17-56399870082b nodeName:}" failed. No retries permitted until 2023-12-07 20:38:27.041371613 +0000 UTC m=+21.908890571 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6d6632ea-9aae-43e7-8b17-56399870082b-config-volume") pod "coredns-5dd5756b68-7mss7" (UID: "6d6632ea-9aae-43e7-8b17-56399870082b") : object "kube-system"/"coredns" not registered
	Dec 07 20:38:19 multinode-660958 kubelet[918]: E1207 20:38:19.142065     918 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 07 20:38:19 multinode-660958 kubelet[918]: E1207 20:38:19.142100     918 projected.go:198] Error preparing data for projected volume kube-api-access-bq5rf for pod default/busybox-5bc68d56bd-jbm9q: object "default"/"kube-root-ca.crt" not registered
	Dec 07 20:38:19 multinode-660958 kubelet[918]: E1207 20:38:19.142199     918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c38ee0c6-472e-4db5-bb15-c1f1ce390207-kube-api-access-bq5rf podName:c38ee0c6-472e-4db5-bb15-c1f1ce390207 nodeName:}" failed. No retries permitted until 2023-12-07 20:38:27.142181679 +0000 UTC m=+22.009700636 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bq5rf" (UniqueName: "kubernetes.io/projected/c38ee0c6-472e-4db5-bb15-c1f1ce390207-kube-api-access-bq5rf") pod "busybox-5bc68d56bd-jbm9q" (UID: "c38ee0c6-472e-4db5-bb15-c1f1ce390207") : object "default"/"kube-root-ca.crt" not registered
	Dec 07 20:38:19 multinode-660958 kubelet[918]: E1207 20:38:19.425108     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-7mss7" podUID="6d6632ea-9aae-43e7-8b17-56399870082b"
	Dec 07 20:38:43 multinode-660958 kubelet[918]: I1207 20:38:43.591900     918 scope.go:117] "RemoveContainer" containerID="d3f4a64be23c4d97f11c819cef00876275801554fbc32e8997b22bd6cdc6c1f7"
	Dec 07 20:39:05 multinode-660958 kubelet[918]: E1207 20:39:05.554380     918 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 20:39:05 multinode-660958 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 20:39:05 multinode-660958 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 20:39:05 multinode-660958 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 20:40:05 multinode-660958 kubelet[918]: E1207 20:40:05.549849     918 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 20:40:05 multinode-660958 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 20:40:05 multinode-660958 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 20:40:05 multinode-660958 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 20:41:05 multinode-660958 kubelet[918]: E1207 20:41:05.550450     918 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 20:41:05 multinode-660958 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 20:41:05 multinode-660958 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 20:41:05 multinode-660958 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-660958 -n multinode-660958
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-660958 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (688.93s)
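For local triage, the post-mortem checks the harness runs above can be repeated by hand. A minimal sketch, reusing the profile and context name recorded in this log (the profile name and binary path are specific to this CI run):

    # List any pods that are not in the Running phase, as helpers_test.go does above
    kubectl --context multinode-660958 get po -A --field-selector=status.phase!=Running
    # Re-query the apiserver component state for the same profile
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p multinode-660958 -n multinode-660958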

                                                
                                    
TestMultiNode/serial/StopMultiNode (143.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-660958 stop: exit status 82 (2m1.327942701s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-660958"  ...
	* Stopping node "multinode-660958"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-660958 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-660958 status: exit status 3 (18.684174419s)

                                                
                                                
-- stdout --
	multinode-660958
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-660958-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 20:44:18.450269   36018 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	E1207 20:44:18.450316   36018 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-660958 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-660958 -n multinode-660958
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-660958 -n multinode-660958: exit status 3 (3.169097049s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 20:44:21.778300   36129 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	E1207 20:44:21.778320   36129 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-660958" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.18s)
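The GUEST_STOP_TIMEOUT above indicates `minikube stop` gave up while the VM still reported "Running". A rough manual reproduction of the same sequence, using the profile name from this log (a sketch, not part of the test output):

    # Stop all nodes of the profile, then query their state, as multinode_test.go does above
    out/minikube-linux-amd64 -p multinode-660958 stop
    out/minikube-linux-amd64 -p multinode-660958 status
    # Collect logs for a GitHub issue, as the error box above suggests
    out/minikube-linux-amd64 -p multinode-660958 logs --file=logs.txt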

                                                
                                    
TestPreload (253.78s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-867544 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1207 20:54:08.984960   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:54:28.943788   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-867544 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m37.559605936s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-867544 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-867544 image pull gcr.io/k8s-minikube/busybox: (2.797311383s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-867544
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-867544: (7.099704048s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-867544 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1207 20:56:05.939954   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:56:41.699551   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-867544 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.203819739s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-867544 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:523: *** TestPreload FAILED at 2023-12-07 20:56:53.428520371 +0000 UTC m=+3337.596665348
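The assertion at preload_test.go:76 expects the busybox image pulled before the stop/start cycle to still appear in `image list` after the restart; the listing above contains only the preloaded v1.24.4 images, so the check fails. The following stand-alone sketch mirrors that check under stated assumptions (plain os/exec and string matching, not the actual test code), reusing the profile name and binary path from the log.

```go
// Sketch only: run `image list` for the profile above and verify the pulled
// busybox image survived the restart. This mirrors the failed assertion but is
// not the helper used by preload_test.go.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-867544", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox found: the pulled image survived the restart")
	} else {
		fmt.Println("busybox missing: only the preloaded images are present after restart")
	}
}
```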
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-867544 -n test-preload-867544
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-867544 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-867544 logs -n 25: (1.110067566s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-660958 ssh -n                                                                 | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n multinode-660958 sudo cat                                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | /home/docker/cp-test_multinode-660958-m03_multinode-660958.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-660958 cp multinode-660958-m03:/home/docker/cp-test.txt                       | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m02:/home/docker/cp-test_multinode-660958-m03_multinode-660958-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n                                                                 | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | multinode-660958-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-660958 ssh -n multinode-660958-m02 sudo cat                                   | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	|         | /home/docker/cp-test_multinode-660958-m03_multinode-660958-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-660958 node stop m03                                                          | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:29 UTC |
	| node    | multinode-660958 node start                                                             | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:29 UTC | 07 Dec 23 20:30 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-660958                                                                | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:30 UTC |                     |
	| stop    | -p multinode-660958                                                                     | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:30 UTC |                     |
	| start   | -p multinode-660958                                                                     | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:32 UTC | 07 Dec 23 20:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-660958                                                                | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:41 UTC |                     |
	| node    | multinode-660958 node delete                                                            | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:41 UTC | 07 Dec 23 20:41 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-660958 stop                                                                   | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:41 UTC |                     |
	| start   | -p multinode-660958                                                                     | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:44 UTC | 07 Dec 23 20:51 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-660958                                                                | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:51 UTC |                     |
	| start   | -p multinode-660958-m02                                                                 | multinode-660958-m02 | jenkins | v1.32.0 | 07 Dec 23 20:51 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-660958-m03                                                                 | multinode-660958-m03 | jenkins | v1.32.0 | 07 Dec 23 20:51 UTC | 07 Dec 23 20:52 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-660958                                                                 | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:52 UTC |                     |
	| delete  | -p multinode-660958-m03                                                                 | multinode-660958-m03 | jenkins | v1.32.0 | 07 Dec 23 20:52 UTC | 07 Dec 23 20:52 UTC |
	| delete  | -p multinode-660958                                                                     | multinode-660958     | jenkins | v1.32.0 | 07 Dec 23 20:52 UTC | 07 Dec 23 20:52 UTC |
	| start   | -p test-preload-867544                                                                  | test-preload-867544  | jenkins | v1.32.0 | 07 Dec 23 20:52 UTC | 07 Dec 23 20:55 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-867544 image pull                                                          | test-preload-867544  | jenkins | v1.32.0 | 07 Dec 23 20:55 UTC | 07 Dec 23 20:55 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-867544                                                                  | test-preload-867544  | jenkins | v1.32.0 | 07 Dec 23 20:55 UTC | 07 Dec 23 20:55 UTC |
	| start   | -p test-preload-867544                                                                  | test-preload-867544  | jenkins | v1.32.0 | 07 Dec 23 20:55 UTC | 07 Dec 23 20:56 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-867544 image list                                                          | test-preload-867544  | jenkins | v1.32.0 | 07 Dec 23 20:56 UTC | 07 Dec 23 20:56 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:55:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:55:30.043471   38924 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:55:30.043743   38924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:55:30.043752   38924 out.go:309] Setting ErrFile to fd 2...
	I1207 20:55:30.043756   38924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:55:30.043952   38924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 20:55:30.044479   38924 out.go:303] Setting JSON to false
	I1207 20:55:30.045310   38924 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5876,"bootTime":1701976654,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 20:55:30.045366   38924 start.go:138] virtualization: kvm guest
	I1207 20:55:30.047845   38924 out.go:177] * [test-preload-867544] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 20:55:30.049397   38924 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:55:30.049444   38924 notify.go:220] Checking for updates...
	I1207 20:55:30.050868   38924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:55:30.052275   38924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:55:30.053702   38924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:55:30.054835   38924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 20:55:30.056004   38924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:55:30.057599   38924 config.go:182] Loaded profile config "test-preload-867544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1207 20:55:30.058021   38924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:55:30.058063   38924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:55:30.071987   38924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41423
	I1207 20:55:30.072393   38924 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:55:30.072850   38924 main.go:141] libmachine: Using API Version  1
	I1207 20:55:30.072898   38924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:55:30.073229   38924 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:55:30.073402   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:55:30.075209   38924 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1207 20:55:30.076519   38924 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:55:30.076795   38924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:55:30.076835   38924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:55:30.090918   38924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I1207 20:55:30.091295   38924 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:55:30.091775   38924 main.go:141] libmachine: Using API Version  1
	I1207 20:55:30.091796   38924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:55:30.092139   38924 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:55:30.092294   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:55:30.126724   38924 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 20:55:30.128107   38924 start.go:298] selected driver: kvm2
	I1207 20:55:30.128116   38924 start.go:902] validating driver "kvm2" against &{Name:test-preload-867544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-867544 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host M
ount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:55:30.128219   38924 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:55:30.128862   38924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:55:30.128949   38924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 20:55:30.143092   38924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 20:55:30.143409   38924 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 20:55:30.143468   38924 cni.go:84] Creating CNI manager for ""
	I1207 20:55:30.143482   38924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:55:30.143494   38924 start_flags.go:323] config:
	{Name:test-preload-867544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-867544 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:55:30.143653   38924 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:55:30.145292   38924 out.go:177] * Starting control plane node test-preload-867544 in cluster test-preload-867544
	I1207 20:55:30.146439   38924 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1207 20:55:30.607546   38924 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1207 20:55:30.607589   38924 cache.go:56] Caching tarball of preloaded images
	I1207 20:55:30.607744   38924 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1207 20:55:30.609432   38924 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1207 20:55:30.610684   38924 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:55:30.727114   38924 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1207 20:55:44.028197   38924 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:55:44.028288   38924 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:55:44.922424   38924 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I1207 20:55:44.922551   38924 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/config.json ...
	I1207 20:55:44.922778   38924 start.go:365] acquiring machines lock for test-preload-867544: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 20:55:44.922839   38924 start.go:369] acquired machines lock for "test-preload-867544" in 41.222µs
	I1207 20:55:44.922853   38924 start.go:96] Skipping create...Using existing machine configuration
	I1207 20:55:44.922858   38924 fix.go:54] fixHost starting: 
	I1207 20:55:44.923113   38924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:55:44.923144   38924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:55:44.936855   38924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40083
	I1207 20:55:44.937283   38924 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:55:44.937773   38924 main.go:141] libmachine: Using API Version  1
	I1207 20:55:44.937787   38924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:55:44.938147   38924 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:55:44.938340   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:55:44.938506   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetState
	I1207 20:55:44.940229   38924 fix.go:102] recreateIfNeeded on test-preload-867544: state=Stopped err=<nil>
	I1207 20:55:44.940258   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	W1207 20:55:44.940409   38924 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 20:55:44.942800   38924 out.go:177] * Restarting existing kvm2 VM for "test-preload-867544" ...
	I1207 20:55:44.944188   38924 main.go:141] libmachine: (test-preload-867544) Calling .Start
	I1207 20:55:44.944318   38924 main.go:141] libmachine: (test-preload-867544) Ensuring networks are active...
	I1207 20:55:44.944999   38924 main.go:141] libmachine: (test-preload-867544) Ensuring network default is active
	I1207 20:55:44.945368   38924 main.go:141] libmachine: (test-preload-867544) Ensuring network mk-test-preload-867544 is active
	I1207 20:55:44.945793   38924 main.go:141] libmachine: (test-preload-867544) Getting domain xml...
	I1207 20:55:44.946435   38924 main.go:141] libmachine: (test-preload-867544) Creating domain...
	I1207 20:55:46.147001   38924 main.go:141] libmachine: (test-preload-867544) Waiting to get IP...
	I1207 20:55:46.147969   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:46.148337   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:46.148410   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:46.148340   38992 retry.go:31] will retry after 230.439247ms: waiting for machine to come up
	I1207 20:55:46.380720   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:46.381352   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:46.381378   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:46.381298   38992 retry.go:31] will retry after 294.442417ms: waiting for machine to come up
	I1207 20:55:46.677894   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:46.678410   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:46.678440   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:46.678372   38992 retry.go:31] will retry after 370.795865ms: waiting for machine to come up
	I1207 20:55:47.050779   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:47.051202   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:47.051233   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:47.051148   38992 retry.go:31] will retry after 378.267258ms: waiting for machine to come up
	I1207 20:55:47.430795   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:47.431193   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:47.431222   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:47.431153   38992 retry.go:31] will retry after 734.744292ms: waiting for machine to come up
	I1207 20:55:48.166957   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:48.167425   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:48.167456   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:48.167327   38992 retry.go:31] will retry after 921.097122ms: waiting for machine to come up
	I1207 20:55:49.089628   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:49.090022   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:49.090053   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:49.089969   38992 retry.go:31] will retry after 1.184902276s: waiting for machine to come up
	I1207 20:55:50.276641   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:50.277012   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:50.277042   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:50.276959   38992 retry.go:31] will retry after 1.475650599s: waiting for machine to come up
	I1207 20:55:51.754502   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:51.754925   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:51.754955   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:51.754872   38992 retry.go:31] will retry after 1.231164317s: waiting for machine to come up
	I1207 20:55:52.988170   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:52.988620   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:52.988651   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:52.988509   38992 retry.go:31] will retry after 1.42426712s: waiting for machine to come up
	I1207 20:55:54.415132   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:54.415608   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:54.415629   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:54.415554   38992 retry.go:31] will retry after 2.876619473s: waiting for machine to come up
	I1207 20:55:57.295159   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:57.295613   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:57.295635   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:57.295576   38992 retry.go:31] will retry after 2.277327809s: waiting for machine to come up
	I1207 20:55:59.575919   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:55:59.576215   38924 main.go:141] libmachine: (test-preload-867544) DBG | unable to find current IP address of domain test-preload-867544 in network mk-test-preload-867544
	I1207 20:55:59.576232   38924 main.go:141] libmachine: (test-preload-867544) DBG | I1207 20:55:59.576184   38992 retry.go:31] will retry after 3.912902177s: waiting for machine to come up
	I1207 20:56:03.493437   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.494051   38924 main.go:141] libmachine: (test-preload-867544) Found IP for machine: 192.168.39.150
	I1207 20:56:03.494082   38924 main.go:141] libmachine: (test-preload-867544) Reserving static IP address...
	I1207 20:56:03.494101   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has current primary IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.494419   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "test-preload-867544", mac: "52:54:00:46:4f:1c", ip: "192.168.39.150"} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:03.494446   38924 main.go:141] libmachine: (test-preload-867544) Reserved static IP address: 192.168.39.150
	I1207 20:56:03.494472   38924 main.go:141] libmachine: (test-preload-867544) DBG | skip adding static IP to network mk-test-preload-867544 - found existing host DHCP lease matching {name: "test-preload-867544", mac: "52:54:00:46:4f:1c", ip: "192.168.39.150"}
	I1207 20:56:03.494485   38924 main.go:141] libmachine: (test-preload-867544) Waiting for SSH to be available...
	I1207 20:56:03.494497   38924 main.go:141] libmachine: (test-preload-867544) DBG | Getting to WaitForSSH function...
	I1207 20:56:03.496457   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.496773   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:03.496827   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.496919   38924 main.go:141] libmachine: (test-preload-867544) DBG | Using SSH client type: external
	I1207 20:56:03.496948   38924 main.go:141] libmachine: (test-preload-867544) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/test-preload-867544/id_rsa (-rw-------)
	I1207 20:56:03.496983   38924 main.go:141] libmachine: (test-preload-867544) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/test-preload-867544/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 20:56:03.497000   38924 main.go:141] libmachine: (test-preload-867544) DBG | About to run SSH command:
	I1207 20:56:03.497014   38924 main.go:141] libmachine: (test-preload-867544) DBG | exit 0
	I1207 20:56:03.585549   38924 main.go:141] libmachine: (test-preload-867544) DBG | SSH cmd err, output: <nil>: 
	I1207 20:56:03.585954   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetConfigRaw
	I1207 20:56:03.627208   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetIP
	I1207 20:56:03.629871   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.630218   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:03.630319   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.630433   38924 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/config.json ...
	I1207 20:56:03.689115   38924 machine.go:88] provisioning docker machine ...
	I1207 20:56:03.689159   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:56:03.689536   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetMachineName
	I1207 20:56:03.689748   38924 buildroot.go:166] provisioning hostname "test-preload-867544"
	I1207 20:56:03.689769   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetMachineName
	I1207 20:56:03.689936   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:03.692272   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.692608   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:03.692638   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.692804   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHPort
	I1207 20:56:03.692993   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:03.693150   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:03.693285   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHUsername
	I1207 20:56:03.693448   38924 main.go:141] libmachine: Using SSH client type: native
	I1207 20:56:03.693799   38924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1207 20:56:03.693814   38924 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-867544 && echo "test-preload-867544" | sudo tee /etc/hostname
	I1207 20:56:03.828105   38924 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-867544
	
	I1207 20:56:03.828134   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:03.830866   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.831159   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:03.831189   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.831310   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHPort
	I1207 20:56:03.831460   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:03.831631   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:03.831743   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHUsername
	I1207 20:56:03.831929   38924 main.go:141] libmachine: Using SSH client type: native
	I1207 20:56:03.832284   38924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1207 20:56:03.832302   38924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-867544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-867544/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-867544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:56:03.958238   38924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:56:03.958284   38924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 20:56:03.958308   38924 buildroot.go:174] setting up certificates
	I1207 20:56:03.958317   38924 provision.go:83] configureAuth start
	I1207 20:56:03.958329   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetMachineName
	I1207 20:56:03.958617   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetIP
	I1207 20:56:03.961305   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.961593   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:03.961623   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.961784   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:03.964003   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.964316   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:03.964350   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:03.964442   38924 provision.go:138] copyHostCerts
	I1207 20:56:03.964508   38924 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 20:56:03.964522   38924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 20:56:03.964589   38924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 20:56:03.964717   38924 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 20:56:03.964730   38924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 20:56:03.964768   38924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 20:56:03.964861   38924 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 20:56:03.964869   38924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 20:56:03.964895   38924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 20:56:03.964960   38924 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.test-preload-867544 san=[192.168.39.150 192.168.39.150 localhost 127.0.0.1 minikube test-preload-867544]
	I1207 20:56:04.034014   38924 provision.go:172] copyRemoteCerts
	I1207 20:56:04.034077   38924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:56:04.034099   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:04.036962   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.037289   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:04.037316   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.037456   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHPort
	I1207 20:56:04.037638   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:04.037825   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHUsername
	I1207 20:56:04.037954   38924 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/test-preload-867544/id_rsa Username:docker}
	I1207 20:56:04.127540   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 20:56:04.153625   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1207 20:56:04.178661   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 20:56:04.203446   38924 provision.go:86] duration metric: configureAuth took 245.115571ms
	I1207 20:56:04.203470   38924 buildroot.go:189] setting minikube options for container-runtime
	I1207 20:56:04.203682   38924 config.go:182] Loaded profile config "test-preload-867544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1207 20:56:04.203760   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:04.206151   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.206449   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:04.206487   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.206649   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHPort
	I1207 20:56:04.206787   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:04.206961   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:04.207076   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHUsername
	I1207 20:56:04.207254   38924 main.go:141] libmachine: Using SSH client type: native
	I1207 20:56:04.207556   38924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1207 20:56:04.207585   38924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 20:56:04.542788   38924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 20:56:04.542813   38924 machine.go:91] provisioned docker machine in 853.670403ms
	I1207 20:56:04.542822   38924 start.go:300] post-start starting for "test-preload-867544" (driver="kvm2")
	I1207 20:56:04.542831   38924 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:56:04.542845   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:56:04.543189   38924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:56:04.543214   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:04.545719   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.546095   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:04.546129   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.546282   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHPort
	I1207 20:56:04.546482   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:04.546610   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHUsername
	I1207 20:56:04.546806   38924 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/test-preload-867544/id_rsa Username:docker}
	I1207 20:56:04.635384   38924 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:56:04.639509   38924 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 20:56:04.639535   38924 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 20:56:04.639612   38924 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 20:56:04.639713   38924 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 20:56:04.639836   38924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 20:56:04.647898   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:56:04.670261   38924 start.go:303] post-start completed in 127.424727ms
	I1207 20:56:04.670286   38924 fix.go:56] fixHost completed within 19.747427481s
	I1207 20:56:04.670305   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:04.672629   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.673019   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:04.673046   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.673242   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHPort
	I1207 20:56:04.673403   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:04.673550   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:04.673667   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHUsername
	I1207 20:56:04.673825   38924 main.go:141] libmachine: Using SSH client type: native
	I1207 20:56:04.674170   38924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1207 20:56:04.674183   38924 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 20:56:04.790420   38924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701982564.738547302
	
	I1207 20:56:04.790452   38924 fix.go:206] guest clock: 1701982564.738547302
	I1207 20:56:04.790459   38924 fix.go:219] Guest: 2023-12-07 20:56:04.738547302 +0000 UTC Remote: 2023-12-07 20:56:04.670289978 +0000 UTC m=+34.674540403 (delta=68.257324ms)
	I1207 20:56:04.790519   38924 fix.go:190] guest clock delta is within tolerance: 68.257324ms
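The guest-clock check here is simple subtraction of the host-side timestamp from the guest's date +%s.%N output; reproducing the arithmetic with the two timestamps reported above (illustrative only, run anywhere with bc installed):

    echo "1701982564.738547302 - 1701982564.670289978" | bc
    # -> .068257324   (~68 ms, inside minikube's tolerance)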
	I1207 20:56:04.790532   38924 start.go:83] releasing machines lock for "test-preload-867544", held for 19.867679979s
	I1207 20:56:04.790559   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:56:04.790818   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetIP
	I1207 20:56:04.793415   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.793729   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:04.793767   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.793895   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:56:04.794402   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:56:04.794556   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:56:04.794646   38924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:56:04.794685   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:04.794768   38924 ssh_runner.go:195] Run: cat /version.json
	I1207 20:56:04.794788   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:04.797055   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.797343   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.797398   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:04.797422   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.797548   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHPort
	I1207 20:56:04.797716   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:04.797748   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:04.797775   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:04.797855   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHUsername
	I1207 20:56:04.797910   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHPort
	I1207 20:56:04.798043   38924 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/test-preload-867544/id_rsa Username:docker}
	I1207 20:56:04.798053   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:04.798200   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHUsername
	I1207 20:56:04.798315   38924 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/test-preload-867544/id_rsa Username:docker}
	I1207 20:56:04.903344   38924 ssh_runner.go:195] Run: systemctl --version
	I1207 20:56:04.909067   38924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 20:56:05.051777   38924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 20:56:05.057874   38924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 20:56:05.057963   38924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 20:56:05.073755   38924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 20:56:05.073778   38924 start.go:475] detecting cgroup driver to use...
	I1207 20:56:05.073840   38924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 20:56:05.090595   38924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 20:56:05.106161   38924 docker.go:203] disabling cri-docker service (if available) ...
	I1207 20:56:05.106214   38924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 20:56:05.121801   38924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 20:56:05.136800   38924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 20:56:05.246528   38924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 20:56:05.361407   38924 docker.go:219] disabling docker service ...
	I1207 20:56:05.361477   38924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 20:56:05.375778   38924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 20:56:05.387722   38924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 20:56:05.500132   38924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 20:56:05.612192   38924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 20:56:05.625043   38924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:56:05.642625   38924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1207 20:56:05.642697   38924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:56:05.651774   38924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 20:56:05.651835   38924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:56:05.660877   38924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:56:05.669998   38924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 20:56:05.679018   38924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
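The sed edits above rewrite the CRI-O drop-in so the pause image and cgroup driver match what kubeadm will be told to use further down. Confirming the resulting values by hand (illustrative):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.7"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"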
	I1207 20:56:05.688356   38924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:56:05.696392   38924 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 20:56:05.696452   38924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 20:56:05.709334   38924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 20:56:05.717672   38924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:56:05.827291   38924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 20:56:05.995608   38924 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 20:56:05.995685   38924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 20:56:06.000714   38924 start.go:543] Will wait 60s for crictl version
	I1207 20:56:06.000764   38924 ssh_runner.go:195] Run: which crictl
	I1207 20:56:06.004235   38924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 20:56:06.039497   38924 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 20:56:06.039634   38924 ssh_runner.go:195] Run: crio --version
	I1207 20:56:06.084555   38924 ssh_runner.go:195] Run: crio --version
	I1207 20:56:06.134041   38924 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I1207 20:56:06.135376   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetIP
	I1207 20:56:06.137964   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:06.138274   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:06.138303   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:06.138464   38924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 20:56:06.142293   38924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
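The hosts-file update above follows a remove-then-append pattern, so repeated runs never duplicate the host.minikube.internal entry. A generic sketch of the same idiom (IP and name are placeholders):

    IP=192.168.39.1; NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$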
	I1207 20:56:06.154621   38924 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1207 20:56:06.154677   38924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 20:56:06.191012   38924 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1207 20:56:06.191086   38924 ssh_runner.go:195] Run: which lz4
	I1207 20:56:06.194696   38924 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 20:56:06.198754   38924 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 20:56:06.198783   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1207 20:56:07.923567   38924 crio.go:444] Took 1.728930 seconds to copy over tarball
	I1207 20:56:07.923650   38924 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 20:56:10.717448   38924 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.79376952s)
	I1207 20:56:10.717488   38924 crio.go:451] Took 2.793892 seconds to extract the tarball
	I1207 20:56:10.717499   38924 ssh_runner.go:146] rm: /preloaded.tar.lz4
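Because CRI-O reported no preloaded images, the ~459 MB preload tarball is copied in and unpacked directly under /var, which holds both the container image store and /var/lib/minikube. Done by hand, the equivalent steps look roughly like this (assuming the tarball is already at /preloaded.tar.lz4 on the guest):

    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images   # check whether the cached images are now visible; in this run they were not, so they are loaded one by one below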
	I1207 20:56:10.757404   38924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 20:56:10.803588   38924 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1207 20:56:10.803609   38924 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 20:56:10.803662   38924 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:56:10.803689   38924 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1207 20:56:10.803707   38924 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1207 20:56:10.803724   38924 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1207 20:56:10.803759   38924 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1207 20:56:10.803792   38924 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1207 20:56:10.803818   38924 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1207 20:56:10.803850   38924 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1207 20:56:10.805106   38924 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1207 20:56:10.805135   38924 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1207 20:56:10.805147   38924 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:56:10.805114   38924 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1207 20:56:10.805171   38924 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1207 20:56:10.805188   38924 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1207 20:56:10.805192   38924 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1207 20:56:10.805196   38924 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1207 20:56:10.984897   38924 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1207 20:56:10.994159   38924 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1207 20:56:11.035808   38924 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1207 20:56:11.035849   38924 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1207 20:56:11.035892   38924 ssh_runner.go:195] Run: which crictl
	I1207 20:56:11.049808   38924 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1207 20:56:11.049864   38924 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1207 20:56:11.049904   38924 ssh_runner.go:195] Run: which crictl
	I1207 20:56:11.049826   38924 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1207 20:56:11.083847   38924 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1207 20:56:11.083853   38924 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1207 20:56:11.083930   38924 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1207 20:56:11.083986   38924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1207 20:56:11.084813   38924 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1207 20:56:11.086198   38924 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1207 20:56:11.086941   38924 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1207 20:56:11.099572   38924 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1207 20:56:11.197477   38924 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1207 20:56:11.197519   38924 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1207 20:56:11.197534   38924 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1207 20:56:11.197587   38924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1207 20:56:11.197612   38924 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1207 20:56:11.199282   38924 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1207 20:56:11.199316   38924 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1207 20:56:11.199347   38924 ssh_runner.go:195] Run: which crictl
	I1207 20:56:11.242913   38924 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1207 20:56:11.242955   38924 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1207 20:56:11.243032   38924 ssh_runner.go:195] Run: which crictl
	I1207 20:56:11.244102   38924 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1207 20:56:11.244138   38924 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1207 20:56:11.244193   38924 ssh_runner.go:195] Run: which crictl
	I1207 20:56:11.252179   38924 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1207 20:56:11.252213   38924 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1207 20:56:11.252252   38924 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1207 20:56:11.252279   38924 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1207 20:56:11.252308   38924 ssh_runner.go:195] Run: which crictl
	I1207 20:56:11.252256   38924 ssh_runner.go:195] Run: which crictl
	I1207 20:56:11.730074   38924 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:56:13.887786   38924 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.690167916s)
	I1207 20:56:13.887834   38924 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1207 20:56:13.887798   38924 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.690164347s)
	I1207 20:56:13.887847   38924 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1207 20:56:13.887861   38924 ssh_runner.go:235] Completed: which crictl: (2.688493493s)
	I1207 20:56:13.887893   38924 ssh_runner.go:235] Completed: which crictl: (2.644848167s)
	I1207 20:56:13.887929   38924 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1207 20:56:13.887938   38924 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1207 20:56:13.887864   38924 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1207 20:56:13.887978   38924 ssh_runner.go:235] Completed: which crictl: (2.643769748s)
	I1207 20:56:13.888003   38924 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1207 20:56:13.888042   38924 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1207 20:56:13.888103   38924 ssh_runner.go:235] Completed: which crictl: (2.635781739s)
	I1207 20:56:13.888153   38924 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1207 20:56:13.888155   38924 ssh_runner.go:235] Completed: which crictl: (2.63578261s)
	I1207 20:56:13.888213   38924 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.158112465s)
	I1207 20:56:13.888231   38924 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1207 20:56:14.835706   38924 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1207 20:56:14.835814   38924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1207 20:56:14.835846   38924 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1207 20:56:14.835956   38924 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1207 20:56:14.835987   38924 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1207 20:56:14.836042   38924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1207 20:56:14.836052   38924 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1207 20:56:14.836077   38924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1207 20:56:14.836101   38924 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1207 20:56:14.836129   38924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1207 20:56:14.836152   38924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1207 20:56:14.841408   38924 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1207 20:56:14.841422   38924 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1207 20:56:14.841452   38924 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1207 20:56:14.850877   38924 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1207 20:56:14.850918   38924 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1207 20:56:14.850941   38924 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1207 20:56:14.851138   38924 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1207 20:56:15.285935   38924 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1207 20:56:15.285978   38924 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1207 20:56:15.286022   38924 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1207 20:56:16.029748   38924 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1207 20:56:16.029790   38924 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1207 20:56:16.029833   38924 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1207 20:56:16.777291   38924 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1207 20:56:16.777343   38924 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1207 20:56:16.777396   38924 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1207 20:56:19.030890   38924 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.253468713s)
	I1207 20:56:19.030929   38924 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1207 20:56:19.030971   38924 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I1207 20:56:19.031027   38924 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1207 20:56:19.172493   38924 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1207 20:56:19.172539   38924 cache_images.go:123] Successfully loaded all cached images
	I1207 20:56:19.172543   38924 cache_images.go:92] LoadImages completed in 8.368923826s
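Since the preload did not surface the images, each one is loaded from the local cache instead: inspect the store, remove any stale tag, copy the cached tarball to /var/lib/minikube/images over SSH, and podman-load it. For a single image the pattern sketched from the lines above is roughly:

    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7 \
      || {
        sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7 2>/dev/null
        # the image tarball is copied over ssh to /var/lib/minikube/images first
        sudo podman load -i /var/lib/minikube/images/pause_3.7
      }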
	I1207 20:56:19.172608   38924 ssh_runner.go:195] Run: crio config
	I1207 20:56:19.229823   38924 cni.go:84] Creating CNI manager for ""
	I1207 20:56:19.229844   38924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:56:19.229860   38924 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:56:19.229876   38924 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-867544 NodeName:test-preload-867544 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 20:56:19.230053   38924 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-867544"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
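The generated kubeadm config above targets the v1beta3 API. To sanity-check it against kubeadm's own defaults before the init phases below run, something like this (purely illustrative, executed on the guest) works:

    /var/lib/minikube/binaries/v1.24.4/kubeadm config print init-defaults > /tmp/defaults.yaml
    diff -u /tmp/defaults.yaml /var/tmp/minikube/kubeadm.yaml | less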
	
	I1207 20:56:19.230134   38924 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-867544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-867544 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 20:56:19.230193   38924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1207 20:56:19.239174   38924 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 20:56:19.239249   38924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 20:56:19.247975   38924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1207 20:56:19.264012   38924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 20:56:19.280083   38924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
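After kubelet.service and the 10-kubeadm.conf drop-in are written, systemd only picks them up after a daemon-reload. A manual spot-check of the effective unit (a sketch, not part of the test):

    sudo systemctl daemon-reload
    systemctl cat kubelet        # shows kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl is-enabled kubelet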
	I1207 20:56:19.297372   38924 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I1207 20:56:19.301121   38924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:56:19.313351   38924 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544 for IP: 192.168.39.150
	I1207 20:56:19.313383   38924 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:56:19.313535   38924 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 20:56:19.313586   38924 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 20:56:19.313683   38924 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/client.key
	I1207 20:56:19.313760   38924 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/apiserver.key.28911f6b
	I1207 20:56:19.313812   38924 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/proxy-client.key
	I1207 20:56:19.313975   38924 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 20:56:19.314017   38924 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 20:56:19.314034   38924 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 20:56:19.314073   38924 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 20:56:19.314119   38924 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:56:19.314170   38924 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 20:56:19.314238   38924 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 20:56:19.315073   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 20:56:19.338917   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 20:56:19.362166   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 20:56:19.386041   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 20:56:19.409264   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:56:19.432398   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 20:56:19.454976   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:56:19.477665   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:56:19.500875   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:56:19.522998   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 20:56:19.544616   38924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 20:56:19.567541   38924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 20:56:19.583821   38924 ssh_runner.go:195] Run: openssl version
	I1207 20:56:19.589310   38924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:56:19.599173   38924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:56:19.603908   38924 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:56:19.603974   38924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:56:19.609536   38924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 20:56:19.619040   38924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 20:56:19.628660   38924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 20:56:19.633293   38924 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 20:56:19.633347   38924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 20:56:19.638803   38924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 20:56:19.648299   38924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 20:56:19.657946   38924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 20:56:19.662580   38924 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 20:56:19.662638   38924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 20:56:19.668095   38924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
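The symlink names created above come from OpenSSL's subject-hash convention: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs as <hash>.0 so the TLS stack can find it. Recomputing one of the hashes by hand (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941   (matches the /etc/ssl/certs/b5213941.0 symlink created above)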
	I1207 20:56:19.677862   38924 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:56:19.682414   38924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 20:56:19.688162   38924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 20:56:19.694067   38924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 20:56:19.699749   38924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 20:56:19.705651   38924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 20:56:19.711405   38924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
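Each of the openssl -checkend 86400 calls above exits non-zero if the certificate would expire within the next 24 hours, which is how minikube decides whether the existing control-plane certs can be reused. The same check for a single cert, with a readable result (illustrative):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
      && echo "still valid for >= 24h" || echo "expires within 24h (would be regenerated)"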
	I1207 20:56:19.717013   38924 kubeadm.go:404] StartCluster: {Name:test-preload-867544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.24.4 ClusterName:test-preload-867544 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:56:19.717116   38924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 20:56:19.717164   38924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 20:56:19.755896   38924 cri.go:89] found id: ""
	I1207 20:56:19.755983   38924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 20:56:19.765717   38924 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 20:56:19.765750   38924 kubeadm.go:636] restartCluster start
	I1207 20:56:19.765811   38924 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 20:56:19.774827   38924 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:19.775252   38924 kubeconfig.go:135] verify returned: extract IP: "test-preload-867544" does not appear in /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:56:19.775369   38924 kubeconfig.go:146] "test-preload-867544" context is missing from /home/jenkins/minikube-integration/17719-9628/kubeconfig - will repair!
	I1207 20:56:19.775675   38924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:56:19.776233   38924 kapi.go:59] client config for test-preload-867544: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:56:19.777123   38924 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 20:56:19.787273   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:19.787342   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:19.799985   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:19.800013   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:19.800070   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:19.811152   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:20.311345   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:20.311423   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:20.322811   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:20.811702   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:20.811773   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:20.823244   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:21.311918   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:21.312016   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:21.323472   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:21.812033   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:21.812141   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:21.823354   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:22.311974   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:22.312046   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:22.323404   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:22.811988   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:22.812092   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:22.823293   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:23.311882   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:23.311955   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:23.323279   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:23.811993   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:23.812077   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:23.823261   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:24.311774   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:24.311842   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:24.323579   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:24.812198   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:24.812266   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:24.823925   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:25.311981   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:25.312061   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:25.323485   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:25.812088   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:25.812169   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:25.823844   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:26.311373   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:26.311453   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:26.323969   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:26.811473   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:26.811576   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:26.824114   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:27.311639   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:27.311726   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:27.323399   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:27.812023   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:27.812115   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:27.825129   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:28.311685   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:28.311782   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:28.323716   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:28.811276   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:28.811354   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:28.822431   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:29.312008   38924 api_server.go:166] Checking apiserver status ...
	I1207 20:56:29.312117   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 20:56:29.323397   38924 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:56:29.788194   38924 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 20:56:29.788241   38924 kubeadm.go:1135] stopping kube-system containers ...
	I1207 20:56:29.788256   38924 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 20:56:29.788305   38924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 20:56:29.835098   38924 cri.go:89] found id: ""
	I1207 20:56:29.835190   38924 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 20:56:29.851285   38924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 20:56:29.860859   38924 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 20:56:29.860917   38924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 20:56:29.870114   38924 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 20:56:29.870140   38924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:56:29.973260   38924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:56:30.552125   38924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:56:30.900671   38924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:56:30.979925   38924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:56:31.059689   38924 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:56:31.059758   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:56:31.075461   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:56:31.591165   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:56:32.091143   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:56:32.591063   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:56:33.091463   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:56:33.591463   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:56:33.614769   38924 api_server.go:72] duration metric: took 2.555082401s to wait for apiserver process to appear ...
	I1207 20:56:33.614805   38924 api_server.go:88] waiting for apiserver healthz status ...
	I1207 20:56:33.614823   38924 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1207 20:56:38.508313   38924 api_server.go:279] https://192.168.39.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 20:56:38.508346   38924 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 20:56:38.508357   38924 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1207 20:56:38.545368   38924 api_server.go:279] https://192.168.39.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 20:56:38.545396   38924 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 20:56:39.046150   38924 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1207 20:56:39.054381   38924 api_server.go:279] https://192.168.39.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 20:56:39.054409   38924 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 20:56:39.546021   38924 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1207 20:56:39.561467   38924 api_server.go:279] https://192.168.39.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 20:56:39.561500   38924 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 20:56:40.046044   38924 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1207 20:56:40.052897   38924 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I1207 20:56:40.065035   38924 api_server.go:141] control plane version: v1.24.4
	I1207 20:56:40.065069   38924 api_server.go:131] duration metric: took 6.450255403s to wait for apiserver health ...
	I1207 20:56:40.065079   38924 cni.go:84] Creating CNI manager for ""
	I1207 20:56:40.065088   38924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:56:40.066714   38924 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 20:56:40.067998   38924 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 20:56:40.079492   38924 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 20:56:40.103564   38924 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 20:56:40.113593   38924 system_pods.go:59] 8 kube-system pods found
	I1207 20:56:40.113623   38924 system_pods.go:61] "coredns-6d4b75cb6d-5fv5k" [46cbd6bc-e9a5-4ea4-8071-b7907b0d9553] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 20:56:40.113630   38924 system_pods.go:61] "coredns-6d4b75cb6d-694pb" [087a12fd-1955-4929-9fb7-a11fb62672a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 20:56:40.113637   38924 system_pods.go:61] "etcd-test-preload-867544" [5032dbc5-40d1-43ca-ae93-7106e4c15379] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 20:56:40.113644   38924 system_pods.go:61] "kube-apiserver-test-preload-867544" [8abc59d1-08d7-4149-86fe-58d73a050548] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 20:56:40.113649   38924 system_pods.go:61] "kube-controller-manager-test-preload-867544" [666eceb8-07d7-4b56-b70f-6015aceda199] Running
	I1207 20:56:40.113653   38924 system_pods.go:61] "kube-proxy-mwl87" [9b141c3a-f3df-4bc9-83a0-07e71b53a87b] Running
	I1207 20:56:40.113658   38924 system_pods.go:61] "kube-scheduler-test-preload-867544" [fd21015f-e929-4a94-b5eb-45ab783a683d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 20:56:40.113662   38924 system_pods.go:61] "storage-provisioner" [449929a2-5589-4b0f-8014-61a1bfe21552] Running
	I1207 20:56:40.113668   38924 system_pods.go:74] duration metric: took 10.080033ms to wait for pod list to return data ...
	I1207 20:56:40.113674   38924 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:56:40.120332   38924 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:56:40.120361   38924 node_conditions.go:123] node cpu capacity is 2
	I1207 20:56:40.120372   38924 node_conditions.go:105] duration metric: took 6.694242ms to run NodePressure ...
	I1207 20:56:40.120387   38924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:56:40.433387   38924 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 20:56:40.440433   38924 kubeadm.go:787] kubelet initialised
	I1207 20:56:40.440458   38924 kubeadm.go:788] duration metric: took 7.042696ms waiting for restarted kubelet to initialise ...
	I1207 20:56:40.440465   38924 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:56:40.450558   38924 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-5fv5k" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:40.473186   38924 pod_ready.go:97] node "test-preload-867544" hosting pod "coredns-6d4b75cb6d-5fv5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:40.473215   38924 pod_ready.go:81] duration metric: took 22.627463ms waiting for pod "coredns-6d4b75cb6d-5fv5k" in "kube-system" namespace to be "Ready" ...
	E1207 20:56:40.473225   38924 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-867544" hosting pod "coredns-6d4b75cb6d-5fv5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:40.473231   38924 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-694pb" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:40.479623   38924 pod_ready.go:97] node "test-preload-867544" hosting pod "coredns-6d4b75cb6d-694pb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:40.479655   38924 pod_ready.go:81] duration metric: took 6.414544ms waiting for pod "coredns-6d4b75cb6d-694pb" in "kube-system" namespace to be "Ready" ...
	E1207 20:56:40.479664   38924 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-867544" hosting pod "coredns-6d4b75cb6d-694pb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:40.479670   38924 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:40.485462   38924 pod_ready.go:97] node "test-preload-867544" hosting pod "etcd-test-preload-867544" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:40.485481   38924 pod_ready.go:81] duration metric: took 5.800867ms waiting for pod "etcd-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	E1207 20:56:40.485488   38924 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-867544" hosting pod "etcd-test-preload-867544" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:40.485496   38924 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:40.507452   38924 pod_ready.go:97] node "test-preload-867544" hosting pod "kube-apiserver-test-preload-867544" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:40.507482   38924 pod_ready.go:81] duration metric: took 21.970704ms waiting for pod "kube-apiserver-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	E1207 20:56:40.507496   38924 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-867544" hosting pod "kube-apiserver-test-preload-867544" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:40.507505   38924 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:40.907396   38924 pod_ready.go:97] node "test-preload-867544" hosting pod "kube-controller-manager-test-preload-867544" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:40.907432   38924 pod_ready.go:81] duration metric: took 399.909086ms waiting for pod "kube-controller-manager-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	E1207 20:56:40.907446   38924 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-867544" hosting pod "kube-controller-manager-test-preload-867544" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:40.907454   38924 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mwl87" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:41.306852   38924 pod_ready.go:97] node "test-preload-867544" hosting pod "kube-proxy-mwl87" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:41.306880   38924 pod_ready.go:81] duration metric: took 399.415443ms waiting for pod "kube-proxy-mwl87" in "kube-system" namespace to be "Ready" ...
	E1207 20:56:41.306889   38924 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-867544" hosting pod "kube-proxy-mwl87" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:41.306895   38924 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:41.707121   38924 pod_ready.go:97] node "test-preload-867544" hosting pod "kube-scheduler-test-preload-867544" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:41.707145   38924 pod_ready.go:81] duration metric: took 400.244439ms waiting for pod "kube-scheduler-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	E1207 20:56:41.707154   38924 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-867544" hosting pod "kube-scheduler-test-preload-867544" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:41.707159   38924 pod_ready.go:38] duration metric: took 1.266684885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:56:41.707177   38924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 20:56:41.719285   38924 ops.go:34] apiserver oom_adj: -16
	I1207 20:56:41.719307   38924 kubeadm.go:640] restartCluster took 21.953550126s
	I1207 20:56:41.719314   38924 kubeadm.go:406] StartCluster complete in 22.002309615s
	I1207 20:56:41.719328   38924 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:56:41.719400   38924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:56:41.720031   38924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:56:41.720243   38924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 20:56:41.720364   38924 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 20:56:41.720448   38924 addons.go:69] Setting storage-provisioner=true in profile "test-preload-867544"
	I1207 20:56:41.720462   38924 config.go:182] Loaded profile config "test-preload-867544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1207 20:56:41.720469   38924 addons.go:69] Setting default-storageclass=true in profile "test-preload-867544"
	I1207 20:56:41.720474   38924 addons.go:231] Setting addon storage-provisioner=true in "test-preload-867544"
	W1207 20:56:41.720483   38924 addons.go:240] addon storage-provisioner should already be in state true
	I1207 20:56:41.720486   38924 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-867544"
	I1207 20:56:41.720540   38924 host.go:66] Checking if "test-preload-867544" exists ...
	I1207 20:56:41.720934   38924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:56:41.720975   38924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:56:41.720982   38924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:56:41.720885   38924 kapi.go:59] client config for test-preload-867544: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:56:41.721017   38924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:56:41.725353   38924 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-867544" context rescaled to 1 replicas
	I1207 20:56:41.725389   38924 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 20:56:41.727249   38924 out.go:177] * Verifying Kubernetes components...
	I1207 20:56:41.728624   38924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:56:41.735375   38924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I1207 20:56:41.735582   38924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I1207 20:56:41.735769   38924 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:56:41.735876   38924 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:56:41.736217   38924 main.go:141] libmachine: Using API Version  1
	I1207 20:56:41.736236   38924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:56:41.736336   38924 main.go:141] libmachine: Using API Version  1
	I1207 20:56:41.736356   38924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:56:41.736555   38924 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:56:41.736750   38924 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:56:41.736927   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetState
	I1207 20:56:41.737116   38924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:56:41.737158   38924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:56:41.739401   38924 kapi.go:59] client config for test-preload-867544: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/profiles/test-preload-867544/client.key", CAFile:"/home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:56:41.739741   38924 addons.go:231] Setting addon default-storageclass=true in "test-preload-867544"
	W1207 20:56:41.739759   38924 addons.go:240] addon default-storageclass should already be in state true
	I1207 20:56:41.739782   38924 host.go:66] Checking if "test-preload-867544" exists ...
	I1207 20:56:41.740120   38924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:56:41.740155   38924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:56:41.751976   38924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35315
	I1207 20:56:41.752348   38924 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:56:41.752801   38924 main.go:141] libmachine: Using API Version  1
	I1207 20:56:41.752824   38924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:56:41.753128   38924 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:56:41.753310   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetState
	I1207 20:56:41.755117   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:56:41.757152   38924 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:56:41.755640   38924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I1207 20:56:41.757608   38924 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:56:41.758591   38924 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:56:41.758658   38924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 20:56:41.758682   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:41.759035   38924 main.go:141] libmachine: Using API Version  1
	I1207 20:56:41.759047   38924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:56:41.759362   38924 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:56:41.759935   38924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:56:41.759989   38924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:56:41.761860   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:41.762364   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:41.762395   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:41.762514   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHPort
	I1207 20:56:41.762662   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:41.762810   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHUsername
	I1207 20:56:41.763030   38924 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/test-preload-867544/id_rsa Username:docker}
	I1207 20:56:41.775275   38924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46485
	I1207 20:56:41.775714   38924 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:56:41.776212   38924 main.go:141] libmachine: Using API Version  1
	I1207 20:56:41.776232   38924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:56:41.776587   38924 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:56:41.776764   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetState
	I1207 20:56:41.778502   38924 main.go:141] libmachine: (test-preload-867544) Calling .DriverName
	I1207 20:56:41.778724   38924 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 20:56:41.778740   38924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 20:56:41.778760   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHHostname
	I1207 20:56:41.781841   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:41.782362   38924 main.go:141] libmachine: (test-preload-867544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:4f:1c", ip: ""} in network mk-test-preload-867544: {Iface:virbr1 ExpiryTime:2023-12-07 21:55:57 +0000 UTC Type:0 Mac:52:54:00:46:4f:1c Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-867544 Clientid:01:52:54:00:46:4f:1c}
	I1207 20:56:41.782388   38924 main.go:141] libmachine: (test-preload-867544) DBG | domain test-preload-867544 has defined IP address 192.168.39.150 and MAC address 52:54:00:46:4f:1c in network mk-test-preload-867544
	I1207 20:56:41.782529   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHPort
	I1207 20:56:41.782696   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHKeyPath
	I1207 20:56:41.782808   38924 main.go:141] libmachine: (test-preload-867544) Calling .GetSSHUsername
	I1207 20:56:41.782944   38924 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/test-preload-867544/id_rsa Username:docker}
	I1207 20:56:41.890765   38924 node_ready.go:35] waiting up to 6m0s for node "test-preload-867544" to be "Ready" ...
	I1207 20:56:41.890881   38924 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1207 20:56:41.901262   38924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:56:41.912587   38924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 20:56:42.756700   38924 main.go:141] libmachine: Making call to close driver server
	I1207 20:56:42.756724   38924 main.go:141] libmachine: Making call to close driver server
	I1207 20:56:42.756760   38924 main.go:141] libmachine: (test-preload-867544) Calling .Close
	I1207 20:56:42.756819   38924 main.go:141] libmachine: (test-preload-867544) Calling .Close
	I1207 20:56:42.757156   38924 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:56:42.757186   38924 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:56:42.757196   38924 main.go:141] libmachine: Making call to close driver server
	I1207 20:56:42.757206   38924 main.go:141] libmachine: (test-preload-867544) Calling .Close
	I1207 20:56:42.757155   38924 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:56:42.757260   38924 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:56:42.757264   38924 main.go:141] libmachine: (test-preload-867544) DBG | Closing plugin on server side
	I1207 20:56:42.757281   38924 main.go:141] libmachine: (test-preload-867544) DBG | Closing plugin on server side
	I1207 20:56:42.757270   38924 main.go:141] libmachine: Making call to close driver server
	I1207 20:56:42.757358   38924 main.go:141] libmachine: (test-preload-867544) Calling .Close
	I1207 20:56:42.757410   38924 main.go:141] libmachine: (test-preload-867544) DBG | Closing plugin on server side
	I1207 20:56:42.757429   38924 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:56:42.757447   38924 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:56:42.757571   38924 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:56:42.757585   38924 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:56:42.763144   38924 main.go:141] libmachine: Making call to close driver server
	I1207 20:56:42.763159   38924 main.go:141] libmachine: (test-preload-867544) Calling .Close
	I1207 20:56:42.763373   38924 main.go:141] libmachine: (test-preload-867544) DBG | Closing plugin on server side
	I1207 20:56:42.763379   38924 main.go:141] libmachine: Successfully made call to close driver server
	I1207 20:56:42.763412   38924 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 20:56:42.765274   38924 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1207 20:56:42.766614   38924 addons.go:502] enable addons completed in 1.046268224s: enabled=[storage-provisioner default-storageclass]
	I1207 20:56:44.113379   38924 node_ready.go:58] node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:46.611592   38924 node_ready.go:58] node "test-preload-867544" has status "Ready":"False"
	I1207 20:56:49.111731   38924 node_ready.go:49] node "test-preload-867544" has status "Ready":"True"
	I1207 20:56:49.111757   38924 node_ready.go:38] duration metric: took 7.22096075s waiting for node "test-preload-867544" to be "Ready" ...
	I1207 20:56:49.111766   38924 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:56:49.117511   38924 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-694pb" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:49.128171   38924 pod_ready.go:92] pod "coredns-6d4b75cb6d-694pb" in "kube-system" namespace has status "Ready":"True"
	I1207 20:56:49.128198   38924 pod_ready.go:81] duration metric: took 10.662665ms waiting for pod "coredns-6d4b75cb6d-694pb" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:49.128208   38924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:49.133304   38924 pod_ready.go:92] pod "etcd-test-preload-867544" in "kube-system" namespace has status "Ready":"True"
	I1207 20:56:49.133322   38924 pod_ready.go:81] duration metric: took 5.108556ms waiting for pod "etcd-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:49.133329   38924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:51.149370   38924 pod_ready.go:102] pod "kube-apiserver-test-preload-867544" in "kube-system" namespace has status "Ready":"False"
	I1207 20:56:51.653333   38924 pod_ready.go:92] pod "kube-apiserver-test-preload-867544" in "kube-system" namespace has status "Ready":"True"
	I1207 20:56:51.653361   38924 pod_ready.go:81] duration metric: took 2.520024241s waiting for pod "kube-apiserver-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:51.653377   38924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:51.662746   38924 pod_ready.go:92] pod "kube-controller-manager-test-preload-867544" in "kube-system" namespace has status "Ready":"True"
	I1207 20:56:51.662768   38924 pod_ready.go:81] duration metric: took 9.381861ms waiting for pod "kube-controller-manager-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:51.662779   38924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mwl87" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:51.912467   38924 pod_ready.go:92] pod "kube-proxy-mwl87" in "kube-system" namespace has status "Ready":"True"
	I1207 20:56:51.912497   38924 pod_ready.go:81] duration metric: took 249.709627ms waiting for pod "kube-proxy-mwl87" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:51.912510   38924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:52.312461   38924 pod_ready.go:92] pod "kube-scheduler-test-preload-867544" in "kube-system" namespace has status "Ready":"True"
	I1207 20:56:52.312491   38924 pod_ready.go:81] duration metric: took 399.973052ms waiting for pod "kube-scheduler-test-preload-867544" in "kube-system" namespace to be "Ready" ...
	I1207 20:56:52.312506   38924 pod_ready.go:38] duration metric: took 3.200731339s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:56:52.312522   38924 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:56:52.312579   38924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:56:52.325534   38924 api_server.go:72] duration metric: took 10.600117057s to wait for apiserver process to appear ...
	I1207 20:56:52.325561   38924 api_server.go:88] waiting for apiserver healthz status ...
	I1207 20:56:52.325575   38924 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1207 20:56:52.331077   38924 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I1207 20:56:52.332090   38924 api_server.go:141] control plane version: v1.24.4
	I1207 20:56:52.332109   38924 api_server.go:131] duration metric: took 6.541793ms to wait for apiserver health ...
	I1207 20:56:52.332117   38924 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 20:56:52.514447   38924 system_pods.go:59] 7 kube-system pods found
	I1207 20:56:52.514486   38924 system_pods.go:61] "coredns-6d4b75cb6d-694pb" [087a12fd-1955-4929-9fb7-a11fb62672a3] Running
	I1207 20:56:52.514492   38924 system_pods.go:61] "etcd-test-preload-867544" [5032dbc5-40d1-43ca-ae93-7106e4c15379] Running
	I1207 20:56:52.514499   38924 system_pods.go:61] "kube-apiserver-test-preload-867544" [8abc59d1-08d7-4149-86fe-58d73a050548] Running
	I1207 20:56:52.514505   38924 system_pods.go:61] "kube-controller-manager-test-preload-867544" [666eceb8-07d7-4b56-b70f-6015aceda199] Running
	I1207 20:56:52.514515   38924 system_pods.go:61] "kube-proxy-mwl87" [9b141c3a-f3df-4bc9-83a0-07e71b53a87b] Running
	I1207 20:56:52.514521   38924 system_pods.go:61] "kube-scheduler-test-preload-867544" [fd21015f-e929-4a94-b5eb-45ab783a683d] Running
	I1207 20:56:52.514527   38924 system_pods.go:61] "storage-provisioner" [449929a2-5589-4b0f-8014-61a1bfe21552] Running
	I1207 20:56:52.514535   38924 system_pods.go:74] duration metric: took 182.410862ms to wait for pod list to return data ...
	I1207 20:56:52.514545   38924 default_sa.go:34] waiting for default service account to be created ...
	I1207 20:56:52.712015   38924 default_sa.go:45] found service account: "default"
	I1207 20:56:52.712043   38924 default_sa.go:55] duration metric: took 197.488248ms for default service account to be created ...
	I1207 20:56:52.712055   38924 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 20:56:52.915686   38924 system_pods.go:86] 7 kube-system pods found
	I1207 20:56:52.915717   38924 system_pods.go:89] "coredns-6d4b75cb6d-694pb" [087a12fd-1955-4929-9fb7-a11fb62672a3] Running
	I1207 20:56:52.915722   38924 system_pods.go:89] "etcd-test-preload-867544" [5032dbc5-40d1-43ca-ae93-7106e4c15379] Running
	I1207 20:56:52.915726   38924 system_pods.go:89] "kube-apiserver-test-preload-867544" [8abc59d1-08d7-4149-86fe-58d73a050548] Running
	I1207 20:56:52.915730   38924 system_pods.go:89] "kube-controller-manager-test-preload-867544" [666eceb8-07d7-4b56-b70f-6015aceda199] Running
	I1207 20:56:52.915734   38924 system_pods.go:89] "kube-proxy-mwl87" [9b141c3a-f3df-4bc9-83a0-07e71b53a87b] Running
	I1207 20:56:52.915737   38924 system_pods.go:89] "kube-scheduler-test-preload-867544" [fd21015f-e929-4a94-b5eb-45ab783a683d] Running
	I1207 20:56:52.915741   38924 system_pods.go:89] "storage-provisioner" [449929a2-5589-4b0f-8014-61a1bfe21552] Running
	I1207 20:56:52.915747   38924 system_pods.go:126] duration metric: took 203.686819ms to wait for k8s-apps to be running ...
	I1207 20:56:52.915753   38924 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 20:56:52.915801   38924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:56:52.931889   38924 system_svc.go:56] duration metric: took 16.128391ms WaitForService to wait for kubelet.
	I1207 20:56:52.931913   38924 kubeadm.go:581] duration metric: took 11.206500877s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 20:56:52.931937   38924 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:56:53.111786   38924 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 20:56:53.111813   38924 node_conditions.go:123] node cpu capacity is 2
	I1207 20:56:53.111821   38924 node_conditions.go:105] duration metric: took 179.880312ms to run NodePressure ...
	I1207 20:56:53.111832   38924 start.go:228] waiting for startup goroutines ...
	I1207 20:56:53.111837   38924 start.go:233] waiting for cluster config update ...
	I1207 20:56:53.111845   38924 start.go:242] writing updated cluster config ...
	I1207 20:56:53.112125   38924 ssh_runner.go:195] Run: rm -f paused
	I1207 20:56:53.162853   38924 start.go:600] kubectl: 1.28.4, cluster: 1.24.4 (minor skew: 4)
	I1207 20:56:53.164949   38924 out.go:177] 
	W1207 20:56:53.166534   38924 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.24.4.
	I1207 20:56:53.168024   38924 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1207 20:56:53.169468   38924 out.go:177] * Done! kubectl is now configured to use "test-preload-867544" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 20:55:56 UTC, ends at Thu 2023-12-07 20:56:54 UTC. --
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.148923798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=83c6582a-7cbe-4b3d-8cb5-27cbdddeb124 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.150037219Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cd5843f2-8d01-4694-8da1-1628a4481496 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.150562017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701982614150461856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=cd5843f2-8d01-4694-8da1-1628a4481496 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.150997239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=901d7335-bd82-42f5-9797-11a27a56a3e3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.151089519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=901d7335-bd82-42f5-9797-11a27a56a3e3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.151311566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cacdf32e056dbf2697d7c9733c03b8599288da36f3c279be5c426eab9802d449,PodSandboxId:302fdc8129e13959b3a114a4aed4f5285a5ddaa38f3cae450e10dfaceca88ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1701982603817436948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-694pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087a12fd-1955-4929-9fb7-a11fb62672a3,},Annotations:map[string]string{io.kubernetes.container.hash: 30f27490,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd00c7c8d56df98b54c2ea7279f4cc4eadbb4b5123194f630f2170230b1bacea,PodSandboxId:8c7eb261bc4e1f02741b753e81e0124e8a445711b3be23a00e83a12806d16b56,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701982601018281947,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 449929a2-5589-4b0f-8014-61a1bfe21552,},Annotations:map[string]string{io.kubernetes.container.hash: 547f45d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695428199debd3222fe6aa1a531b4aca742ca91729a6c79d8e699c5622ac129a,PodSandboxId:a43d2259128c1cd1748f2758af4660bfc26b69b05eeed91e2def4f4c701b8f0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1701982600603319811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwl87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9b141c3a-f3df-4bc9-83a0-07e71b53a87b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a42ebd1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace467552daccb1620bb68948ae31811dcba7d11533257e08e23188d6ce6a9f8,PodSandboxId:757d625d15a2a58d431402de13a454da772348c43117908d4b00394ea8a0e636,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1701982592797171242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94e3e8831c72c977c2ed8f6ce90715,},Annotations:map
[string]string{io.kubernetes.container.hash: bedcc29d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44489c499386b032a0cad490a46ea09701a44fabb70bf93ea5e8ed596b9ddeb,PodSandboxId:d73a7bd7c3d3550a9d1cf88b01d4d05eb1962cc14592dea6d212228c59175dfd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1701982592472070342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acf74d0328cf6a89745d782b64fa590,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349572f0c906050ecd0594a5e8db76d490a359192c34c9f992908b5dba07b48a,PodSandboxId:022562c0ba0c49b9774a31c88c1ad535330de0b47d35106f0081c967016648f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1701982592333009764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8b2c28ee7a5250c8c5a4bd08654bab,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5140b856344fa7b4a347772572e8167bd5d6161b655580349167c553c78c163c,PodSandboxId:14f192cbca0f3e9c99202265d7176526581d4e15a95ccf4c7b81fa7dfec459ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1701982592112419127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100c403036d550ad68b984b79a36d2c2,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8dbd81f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=901d7335-bd82-42f5-9797-11a27a56a3e3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.190142560Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8aa872ed-3c97-401f-8b67-bd482cb2143a name=/runtime.v1.RuntimeService/Version
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.190211048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8aa872ed-3c97-401f-8b67-bd482cb2143a name=/runtime.v1.RuntimeService/Version
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.191795838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9c0d2cc2-2591-48c7-b95f-35716164a7e0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.192284382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701982614192266466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=9c0d2cc2-2591-48c7-b95f-35716164a7e0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.193004550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1b438b71-041d-405c-9790-190bfd66882b name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.193089147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1b438b71-041d-405c-9790-190bfd66882b name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.193668140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cacdf32e056dbf2697d7c9733c03b8599288da36f3c279be5c426eab9802d449,PodSandboxId:302fdc8129e13959b3a114a4aed4f5285a5ddaa38f3cae450e10dfaceca88ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1701982603817436948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-694pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087a12fd-1955-4929-9fb7-a11fb62672a3,},Annotations:map[string]string{io.kubernetes.container.hash: 30f27490,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd00c7c8d56df98b54c2ea7279f4cc4eadbb4b5123194f630f2170230b1bacea,PodSandboxId:8c7eb261bc4e1f02741b753e81e0124e8a445711b3be23a00e83a12806d16b56,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701982601018281947,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 449929a2-5589-4b0f-8014-61a1bfe21552,},Annotations:map[string]string{io.kubernetes.container.hash: 547f45d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695428199debd3222fe6aa1a531b4aca742ca91729a6c79d8e699c5622ac129a,PodSandboxId:a43d2259128c1cd1748f2758af4660bfc26b69b05eeed91e2def4f4c701b8f0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1701982600603319811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwl87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9b141c3a-f3df-4bc9-83a0-07e71b53a87b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a42ebd1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace467552daccb1620bb68948ae31811dcba7d11533257e08e23188d6ce6a9f8,PodSandboxId:757d625d15a2a58d431402de13a454da772348c43117908d4b00394ea8a0e636,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1701982592797171242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94e3e8831c72c977c2ed8f6ce90715,},Annotations:map
[string]string{io.kubernetes.container.hash: bedcc29d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44489c499386b032a0cad490a46ea09701a44fabb70bf93ea5e8ed596b9ddeb,PodSandboxId:d73a7bd7c3d3550a9d1cf88b01d4d05eb1962cc14592dea6d212228c59175dfd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1701982592472070342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acf74d0328cf6a89745d782b64fa590,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349572f0c906050ecd0594a5e8db76d490a359192c34c9f992908b5dba07b48a,PodSandboxId:022562c0ba0c49b9774a31c88c1ad535330de0b47d35106f0081c967016648f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1701982592333009764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8b2c28ee7a5250c8c5a4bd08654bab,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5140b856344fa7b4a347772572e8167bd5d6161b655580349167c553c78c163c,PodSandboxId:14f192cbca0f3e9c99202265d7176526581d4e15a95ccf4c7b81fa7dfec459ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1701982592112419127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100c403036d550ad68b984b79a36d2c2,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8dbd81f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1b438b71-041d-405c-9790-190bfd66882b name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.210855702Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=1d604d2e-4632-4c2b-bac8-07136a864fe0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.211139191Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:302fdc8129e13959b3a114a4aed4f5285a5ddaa38f3cae450e10dfaceca88ce0,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-694pb,Uid:087a12fd-1955-4929-9fb7-a11fb62672a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701982603247471142,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-694pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087a12fd-1955-4929-9fb7-a11fb62672a3,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T20:56:39.007833179Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8c7eb261bc4e1f02741b753e81e0124e8a445711b3be23a00e83a12806d16b56,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:449929a2-5589-4b0f-8014-61a1bfe21552,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701982600240029190,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 449929a2-5589-4b0f-8014-61a1bfe21552,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-07T20:56:39.007831018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a43d2259128c1cd1748f2758af4660bfc26b69b05eeed91e2def4f4c701b8f0a,Metadata:&PodSandboxMetadata{Name:kube-proxy-mwl87,Uid:9b141c3a-f3df-4bc9-83a0-07e71b53a87b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701982599938954360,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mwl87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b141c3a-f3df-4bc9-83a0-07e71b53a87b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T20:56:39.007828713Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d73a7bd7c3d3550a9d1cf88b01d4d05eb1962cc14592dea6d212228c59175dfd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-867544,Uid:3acf74d
0328cf6a89745d782b64fa590,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701982591606732295,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acf74d0328cf6a89745d782b64fa590,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3acf74d0328cf6a89745d782b64fa590,kubernetes.io/config.seen: 2023-12-07T20:56:31.016252484Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:022562c0ba0c49b9774a31c88c1ad535330de0b47d35106f0081c967016648f2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-867544,Uid:ec8b2c28ee7a5250c8c5a4bd08654bab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701982591593610884,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-867544,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8b2c28ee7a5250c8c5a4bd08654bab,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ec8b2c28ee7a5250c8c5a4bd08654bab,kubernetes.io/config.seen: 2023-12-07T20:56:31.016251378Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:757d625d15a2a58d431402de13a454da772348c43117908d4b00394ea8a0e636,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-867544,Uid:cf94e3e8831c72c977c2ed8f6ce90715,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701982591589974707,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94e3e8831c72c977c2ed8f6ce90715,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.150:2379,kubernetes.io/config.hash: cf94e3e8831c72c977c2ed8f6ce90715,kubernetes.io/config.seen: 2023-12-07T20
:56:31.022049366Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:14f192cbca0f3e9c99202265d7176526581d4e15a95ccf4c7b81fa7dfec459ef,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-867544,Uid:100c403036d550ad68b984b79a36d2c2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701982591578653851,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100c403036d550ad68b984b79a36d2c2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.150:8443,kubernetes.io/config.hash: 100c403036d550ad68b984b79a36d2c2,kubernetes.io/config.seen: 2023-12-07T20:56:31.016235864Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=1d604d2e-4632-4c2b-bac8-07136a864fe0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.212861725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d8f3b061-381f-48ee-90e8-72d8d747fbfd name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.212946431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d8f3b061-381f-48ee-90e8-72d8d747fbfd name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.213180432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cacdf32e056dbf2697d7c9733c03b8599288da36f3c279be5c426eab9802d449,PodSandboxId:302fdc8129e13959b3a114a4aed4f5285a5ddaa38f3cae450e10dfaceca88ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1701982603817436948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-694pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087a12fd-1955-4929-9fb7-a11fb62672a3,},Annotations:map[string]string{io.kubernetes.container.hash: 30f27490,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd00c7c8d56df98b54c2ea7279f4cc4eadbb4b5123194f630f2170230b1bacea,PodSandboxId:8c7eb261bc4e1f02741b753e81e0124e8a445711b3be23a00e83a12806d16b56,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701982601018281947,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 449929a2-5589-4b0f-8014-61a1bfe21552,},Annotations:map[string]string{io.kubernetes.container.hash: 547f45d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695428199debd3222fe6aa1a531b4aca742ca91729a6c79d8e699c5622ac129a,PodSandboxId:a43d2259128c1cd1748f2758af4660bfc26b69b05eeed91e2def4f4c701b8f0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1701982600603319811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwl87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9b141c3a-f3df-4bc9-83a0-07e71b53a87b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a42ebd1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace467552daccb1620bb68948ae31811dcba7d11533257e08e23188d6ce6a9f8,PodSandboxId:757d625d15a2a58d431402de13a454da772348c43117908d4b00394ea8a0e636,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1701982592797171242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94e3e8831c72c977c2ed8f6ce90715,},Annotations:map
[string]string{io.kubernetes.container.hash: bedcc29d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44489c499386b032a0cad490a46ea09701a44fabb70bf93ea5e8ed596b9ddeb,PodSandboxId:d73a7bd7c3d3550a9d1cf88b01d4d05eb1962cc14592dea6d212228c59175dfd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1701982592472070342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acf74d0328cf6a89745d782b64fa590,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349572f0c906050ecd0594a5e8db76d490a359192c34c9f992908b5dba07b48a,PodSandboxId:022562c0ba0c49b9774a31c88c1ad535330de0b47d35106f0081c967016648f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1701982592333009764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8b2c28ee7a5250c8c5a4bd08654bab,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5140b856344fa7b4a347772572e8167bd5d6161b655580349167c553c78c163c,PodSandboxId:14f192cbca0f3e9c99202265d7176526581d4e15a95ccf4c7b81fa7dfec459ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1701982592112419127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100c403036d550ad68b984b79a36d2c2,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8dbd81f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d8f3b061-381f-48ee-90e8-72d8d747fbfd name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.229259926Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2e0bc173-b149-47c5-8b96-7f6d2157fd47 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.229332008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2e0bc173-b149-47c5-8b96-7f6d2157fd47 name=/runtime.v1.RuntimeService/Version
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.230463521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d3f2fdc3-dbb4-41b4-b32d-96a47c9e1b2e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.230999451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701982614230986720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=d3f2fdc3-dbb4-41b4-b32d-96a47c9e1b2e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.231678857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4c1bca6d-aeaa-4aa3-a17f-e164242c967e name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.231743315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4c1bca6d-aeaa-4aa3-a17f-e164242c967e name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 20:56:54 test-preload-867544 crio[711]: time="2023-12-07 20:56:54.233351238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cacdf32e056dbf2697d7c9733c03b8599288da36f3c279be5c426eab9802d449,PodSandboxId:302fdc8129e13959b3a114a4aed4f5285a5ddaa38f3cae450e10dfaceca88ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1701982603817436948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-694pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 087a12fd-1955-4929-9fb7-a11fb62672a3,},Annotations:map[string]string{io.kubernetes.container.hash: 30f27490,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd00c7c8d56df98b54c2ea7279f4cc4eadbb4b5123194f630f2170230b1bacea,PodSandboxId:8c7eb261bc4e1f02741b753e81e0124e8a445711b3be23a00e83a12806d16b56,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701982601018281947,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 449929a2-5589-4b0f-8014-61a1bfe21552,},Annotations:map[string]string{io.kubernetes.container.hash: 547f45d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695428199debd3222fe6aa1a531b4aca742ca91729a6c79d8e699c5622ac129a,PodSandboxId:a43d2259128c1cd1748f2758af4660bfc26b69b05eeed91e2def4f4c701b8f0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1701982600603319811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwl87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9b141c3a-f3df-4bc9-83a0-07e71b53a87b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a42ebd1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace467552daccb1620bb68948ae31811dcba7d11533257e08e23188d6ce6a9f8,PodSandboxId:757d625d15a2a58d431402de13a454da772348c43117908d4b00394ea8a0e636,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1701982592797171242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94e3e8831c72c977c2ed8f6ce90715,},Annotations:map
[string]string{io.kubernetes.container.hash: bedcc29d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44489c499386b032a0cad490a46ea09701a44fabb70bf93ea5e8ed596b9ddeb,PodSandboxId:d73a7bd7c3d3550a9d1cf88b01d4d05eb1962cc14592dea6d212228c59175dfd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1701982592472070342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acf74d0328cf6a89745d782b64fa590,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349572f0c906050ecd0594a5e8db76d490a359192c34c9f992908b5dba07b48a,PodSandboxId:022562c0ba0c49b9774a31c88c1ad535330de0b47d35106f0081c967016648f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1701982592333009764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8b2c28ee7a5250c8c5a4bd08654bab,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5140b856344fa7b4a347772572e8167bd5d6161b655580349167c553c78c163c,PodSandboxId:14f192cbca0f3e9c99202265d7176526581d4e15a95ccf4c7b81fa7dfec459ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1701982592112419127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-867544,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100c403036d550ad68b984b79a36d2c2,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 8dbd81f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4c1bca6d-aeaa-4aa3-a17f-e164242c967e name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cacdf32e056db       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   10 seconds ago      Running             coredns                   1                   302fdc8129e13       coredns-6d4b75cb6d-694pb
	bd00c7c8d56df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   8c7eb261bc4e1       storage-provisioner
	695428199debd       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   a43d2259128c1       kube-proxy-mwl87
	ace467552dacc       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   757d625d15a2a       etcd-test-preload-867544
	a44489c499386       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   d73a7bd7c3d35       kube-scheduler-test-preload-867544
	349572f0c9060       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   022562c0ba0c4       kube-controller-manager-test-preload-867544
	5140b856344fa       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   14f192cbca0f3       kube-apiserver-test-preload-867544
	
	* 
	* ==> coredns [cacdf32e056dbf2697d7c9733c03b8599288da36f3c279be5c426eab9802d449] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:46471 - 35 "HINFO IN 473207890674165014.8037896892025722291. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014973017s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-867544
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-867544
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=test-preload-867544
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T20_55_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:55:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-867544
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:56:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:56:48 +0000   Thu, 07 Dec 2023 20:54:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:56:48 +0000   Thu, 07 Dec 2023 20:54:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:56:48 +0000   Thu, 07 Dec 2023 20:54:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:56:48 +0000   Thu, 07 Dec 2023 20:56:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    test-preload-867544
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c480087146b4328bbc67056288b83ec
	  System UUID:                5c480087-146b-4328-bbc6-7056288b83ec
	  Boot ID:                    e5815781-c49b-4824-b0df-32f96283b7d2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-694pb                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     98s
	  kube-system                 etcd-test-preload-867544                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         111s
	  kube-system                 kube-apiserver-test-preload-867544             250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-test-preload-867544    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-mwl87                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-test-preload-867544             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 95s                kube-proxy       
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node test-preload-867544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node test-preload-867544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node test-preload-867544 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  111s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                101s               kubelet          Node test-preload-867544 status is now: NodeReady
	  Normal  RegisteredNode           99s                node-controller  Node test-preload-867544 event: Registered Node test-preload-867544 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x9 over 23s)  kubelet          Node test-preload-867544 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x7 over 23s)  kubelet          Node test-preload-867544 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-867544 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-867544 event: Registered Node test-preload-867544 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 7 20:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066861] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.345961] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.428328] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145374] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.391218] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 7 20:56] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.099885] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.151652] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.111586] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.213130] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +25.057556] systemd-fstab-generator[1089]: Ignoring "noauto" for root device
	[ +10.470302] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.447220] kauditd_printk_skb: 15 callbacks suppressed
	
	* 
	* ==> etcd [ace467552daccb1620bb68948ae31811dcba7d11533257e08e23188d6ce6a9f8] <==
	* {"level":"info","ts":"2023-12-07T20:56:34.330Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"2236e2deb63504cb","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-12-07T20:56:34.333Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-12-07T20:56:34.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb switched to configuration voters=(2465407292199470283)"}
	{"level":"info","ts":"2023-12-07T20:56:34.342Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","added-peer-id":"2236e2deb63504cb","added-peer-peer-urls":["https://192.168.39.150:2380"]}
	{"level":"info","ts":"2023-12-07T20:56:34.343Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:56:34.343Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:56:34.345Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-07T20:56:34.345Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2023-12-07T20:56:34.345Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2023-12-07T20:56:34.345Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-07T20:56:34.345Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2236e2deb63504cb","initial-advertise-peer-urls":["https://192.168.39.150:2380"],"listen-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.150:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-07T20:56:36.017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-07T20:56:36.017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-07T20:56:36.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb received MsgPreVoteResp from 2236e2deb63504cb at term 2"}
	{"level":"info","ts":"2023-12-07T20:56:36.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became candidate at term 3"}
	{"level":"info","ts":"2023-12-07T20:56:36.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb received MsgVoteResp from 2236e2deb63504cb at term 3"}
	{"level":"info","ts":"2023-12-07T20:56:36.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became leader at term 3"}
	{"level":"info","ts":"2023-12-07T20:56:36.018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2236e2deb63504cb elected leader 2236e2deb63504cb at term 3"}
	{"level":"info","ts":"2023-12-07T20:56:36.018Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"2236e2deb63504cb","local-member-attributes":"{Name:test-preload-867544 ClientURLs:[https://192.168.39.150:2379]}","request-path":"/0/members/2236e2deb63504cb/attributes","cluster-id":"d5d2d7cf60dc9e96","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T20:56:36.018Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:56:36.020Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T20:56:36.020Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T20:56:36.020Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:56:36.020Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T20:56:36.021Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.150:2379"}
	
	* 
	* ==> kernel <==
	*  20:56:54 up 1 min,  0 users,  load average: 0.94, 0.28, 0.10
	Linux test-preload-867544 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5140b856344fa7b4a347772572e8167bd5d6161b655580349167c553c78c163c] <==
	* I1207 20:56:38.459736       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1207 20:56:38.459754       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1207 20:56:38.459766       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1207 20:56:38.460178       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1207 20:56:38.460210       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1207 20:56:38.484055       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1207 20:56:38.497845       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1207 20:56:38.637265       1 cache.go:39] Caches are synced for autoregister controller
	I1207 20:56:38.637789       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 20:56:38.655303       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1207 20:56:38.658224       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1207 20:56:38.658989       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1207 20:56:38.659262       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1207 20:56:38.660246       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1207 20:56:38.662303       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 20:56:39.117352       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1207 20:56:39.463878       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 20:56:40.254561       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1207 20:56:40.285144       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1207 20:56:40.361391       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1207 20:56:40.399162       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 20:56:40.410358       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 20:56:41.287070       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1207 20:56:51.626015       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 20:56:51.681329       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [349572f0c906050ecd0594a5e8db76d490a359192c34c9f992908b5dba07b48a] <==
	* I1207 20:56:51.614114       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1207 20:56:51.614642       1 shared_informer.go:262] Caches are synced for taint
	I1207 20:56:51.614718       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1207 20:56:51.614780       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-867544. Assuming now as a timestamp.
	I1207 20:56:51.614827       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1207 20:56:51.615029       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1207 20:56:51.615231       1 event.go:294] "Event occurred" object="test-preload-867544" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-867544 event: Registered Node test-preload-867544 in Controller"
	I1207 20:56:51.618575       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1207 20:56:51.635788       1 shared_informer.go:262] Caches are synced for deployment
	I1207 20:56:51.657258       1 shared_informer.go:262] Caches are synced for persistent volume
	I1207 20:56:51.663429       1 shared_informer.go:262] Caches are synced for attach detach
	I1207 20:56:51.668762       1 shared_informer.go:262] Caches are synced for endpoint
	I1207 20:56:51.673629       1 shared_informer.go:262] Caches are synced for daemon sets
	I1207 20:56:51.685215       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1207 20:56:51.686410       1 shared_informer.go:262] Caches are synced for PVC protection
	I1207 20:56:51.687575       1 shared_informer.go:262] Caches are synced for GC
	I1207 20:56:51.690888       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1207 20:56:51.691263       1 shared_informer.go:262] Caches are synced for ephemeral
	I1207 20:56:51.691304       1 shared_informer.go:262] Caches are synced for HPA
	I1207 20:56:51.697218       1 shared_informer.go:262] Caches are synced for job
	I1207 20:56:51.703319       1 shared_informer.go:262] Caches are synced for resource quota
	I1207 20:56:51.737933       1 shared_informer.go:262] Caches are synced for resource quota
	I1207 20:56:52.143292       1 shared_informer.go:262] Caches are synced for garbage collector
	I1207 20:56:52.161964       1 shared_informer.go:262] Caches are synced for garbage collector
	I1207 20:56:52.162030       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [695428199debd3222fe6aa1a531b4aca742ca91729a6c79d8e699c5622ac129a] <==
	* I1207 20:56:41.197574       1 node.go:163] Successfully retrieved node IP: 192.168.39.150
	I1207 20:56:41.197779       1 server_others.go:138] "Detected node IP" address="192.168.39.150"
	I1207 20:56:41.198031       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1207 20:56:41.272170       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1207 20:56:41.272214       1 server_others.go:206] "Using iptables Proxier"
	I1207 20:56:41.272238       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1207 20:56:41.273367       1 server.go:661] "Version info" version="v1.24.4"
	I1207 20:56:41.273405       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:56:41.274778       1 config.go:317] "Starting service config controller"
	I1207 20:56:41.275026       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1207 20:56:41.275083       1 config.go:226] "Starting endpoint slice config controller"
	I1207 20:56:41.275089       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1207 20:56:41.279094       1 config.go:444] "Starting node config controller"
	I1207 20:56:41.279181       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1207 20:56:41.375583       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1207 20:56:41.375622       1 shared_informer.go:262] Caches are synced for service config
	I1207 20:56:41.379236       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a44489c499386b032a0cad490a46ea09701a44fabb70bf93ea5e8ed596b9ddeb] <==
	* I1207 20:56:34.499969       1 serving.go:348] Generated self-signed cert in-memory
	W1207 20:56:38.508543       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 20:56:38.510548       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 20:56:38.510600       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 20:56:38.510609       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 20:56:38.582010       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1207 20:56:38.582132       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:56:38.589358       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1207 20:56:38.589642       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 20:56:38.589745       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 20:56:38.589912       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1207 20:56:38.690308       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 20:55:56 UTC, ends at Thu 2023-12-07 20:56:54 UTC. --
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: E1207 20:56:39.010628    1095 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-694pb" podUID=087a12fd-1955-4929-9fb7-a11fb62672a3
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.090082    1095 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b141c3a-f3df-4bc9-83a0-07e71b53a87b-xtables-lock\") pod \"kube-proxy-mwl87\" (UID: \"9b141c3a-f3df-4bc9-83a0-07e71b53a87b\") " pod="kube-system/kube-proxy-mwl87"
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.090137    1095 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf8x6\" (UniqueName: \"kubernetes.io/projected/9b141c3a-f3df-4bc9-83a0-07e71b53a87b-kube-api-access-vf8x6\") pod \"kube-proxy-mwl87\" (UID: \"9b141c3a-f3df-4bc9-83a0-07e71b53a87b\") " pod="kube-system/kube-proxy-mwl87"
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.090162    1095 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/449929a2-5589-4b0f-8014-61a1bfe21552-tmp\") pod \"storage-provisioner\" (UID: \"449929a2-5589-4b0f-8014-61a1bfe21552\") " pod="kube-system/storage-provisioner"
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.090181    1095 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/087a12fd-1955-4929-9fb7-a11fb62672a3-config-volume\") pod \"coredns-6d4b75cb6d-694pb\" (UID: \"087a12fd-1955-4929-9fb7-a11fb62672a3\") " pod="kube-system/coredns-6d4b75cb6d-694pb"
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.090199    1095 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmcj8\" (UniqueName: \"kubernetes.io/projected/087a12fd-1955-4929-9fb7-a11fb62672a3-kube-api-access-hmcj8\") pod \"coredns-6d4b75cb6d-694pb\" (UID: \"087a12fd-1955-4929-9fb7-a11fb62672a3\") " pod="kube-system/coredns-6d4b75cb6d-694pb"
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.090219    1095 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b141c3a-f3df-4bc9-83a0-07e71b53a87b-kube-proxy\") pod \"kube-proxy-mwl87\" (UID: \"9b141c3a-f3df-4bc9-83a0-07e71b53a87b\") " pod="kube-system/kube-proxy-mwl87"
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.090243    1095 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6lj\" (UniqueName: \"kubernetes.io/projected/449929a2-5589-4b0f-8014-61a1bfe21552-kube-api-access-5n6lj\") pod \"storage-provisioner\" (UID: \"449929a2-5589-4b0f-8014-61a1bfe21552\") " pod="kube-system/storage-provisioner"
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.090261    1095 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b141c3a-f3df-4bc9-83a0-07e71b53a87b-lib-modules\") pod \"kube-proxy-mwl87\" (UID: \"9b141c3a-f3df-4bc9-83a0-07e71b53a87b\") " pod="kube-system/kube-proxy-mwl87"
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.090271    1095 reconciler.go:159] "Reconciler: start to sync state"
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.464875    1095 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cbd6bc-e9a5-4ea4-8071-b7907b0d9553-config-volume\") pod \"46cbd6bc-e9a5-4ea4-8071-b7907b0d9553\" (UID: \"46cbd6bc-e9a5-4ea4-8071-b7907b0d9553\") "
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.464997    1095 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6crn9\" (UniqueName: \"kubernetes.io/projected/46cbd6bc-e9a5-4ea4-8071-b7907b0d9553-kube-api-access-6crn9\") pod \"46cbd6bc-e9a5-4ea4-8071-b7907b0d9553\" (UID: \"46cbd6bc-e9a5-4ea4-8071-b7907b0d9553\") "
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: E1207 20:56:39.465569    1095 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: E1207 20:56:39.465708    1095 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/087a12fd-1955-4929-9fb7-a11fb62672a3-config-volume podName:087a12fd-1955-4929-9fb7-a11fb62672a3 nodeName:}" failed. No retries permitted until 2023-12-07 20:56:39.965620724 +0000 UTC m=+9.100504784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/087a12fd-1955-4929-9fb7-a11fb62672a3-config-volume") pod "coredns-6d4b75cb6d-694pb" (UID: "087a12fd-1955-4929-9fb7-a11fb62672a3") : object "kube-system"/"coredns" not registered
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: W1207 20:56:39.467023    1095 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/46cbd6bc-e9a5-4ea4-8071-b7907b0d9553/volumes/kubernetes.io~projected/kube-api-access-6crn9: clearQuota called, but quotas disabled
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.467527    1095 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46cbd6bc-e9a5-4ea4-8071-b7907b0d9553-kube-api-access-6crn9" (OuterVolumeSpecName: "kube-api-access-6crn9") pod "46cbd6bc-e9a5-4ea4-8071-b7907b0d9553" (UID: "46cbd6bc-e9a5-4ea4-8071-b7907b0d9553"). InnerVolumeSpecName "kube-api-access-6crn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: W1207 20:56:39.467943    1095 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/46cbd6bc-e9a5-4ea4-8071-b7907b0d9553/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.469012    1095 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46cbd6bc-e9a5-4ea4-8071-b7907b0d9553-config-volume" (OuterVolumeSpecName: "config-volume") pod "46cbd6bc-e9a5-4ea4-8071-b7907b0d9553" (UID: "46cbd6bc-e9a5-4ea4-8071-b7907b0d9553"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.565664    1095 reconciler.go:384] "Volume detached for volume \"kube-api-access-6crn9\" (UniqueName: \"kubernetes.io/projected/46cbd6bc-e9a5-4ea4-8071-b7907b0d9553-kube-api-access-6crn9\") on node \"test-preload-867544\" DevicePath \"\""
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: I1207 20:56:39.565694    1095 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46cbd6bc-e9a5-4ea4-8071-b7907b0d9553-config-volume\") on node \"test-preload-867544\" DevicePath \"\""
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: E1207 20:56:39.969044    1095 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 07 20:56:39 test-preload-867544 kubelet[1095]: E1207 20:56:39.969138    1095 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/087a12fd-1955-4929-9fb7-a11fb62672a3-config-volume podName:087a12fd-1955-4929-9fb7-a11fb62672a3 nodeName:}" failed. No retries permitted until 2023-12-07 20:56:40.96912234 +0000 UTC m=+10.104006386 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/087a12fd-1955-4929-9fb7-a11fb62672a3-config-volume") pod "coredns-6d4b75cb6d-694pb" (UID: "087a12fd-1955-4929-9fb7-a11fb62672a3") : object "kube-system"/"coredns" not registered
	Dec 07 20:56:40 test-preload-867544 kubelet[1095]: E1207 20:56:40.977272    1095 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 07 20:56:40 test-preload-867544 kubelet[1095]: E1207 20:56:40.977369    1095 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/087a12fd-1955-4929-9fb7-a11fb62672a3-config-volume podName:087a12fd-1955-4929-9fb7-a11fb62672a3 nodeName:}" failed. No retries permitted until 2023-12-07 20:56:42.977351971 +0000 UTC m=+12.112236029 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/087a12fd-1955-4929-9fb7-a11fb62672a3-config-volume") pod "coredns-6d4b75cb6d-694pb" (UID: "087a12fd-1955-4929-9fb7-a11fb62672a3") : object "kube-system"/"coredns" not registered
	Dec 07 20:56:43 test-preload-867544 kubelet[1095]: I1207 20:56:43.124833    1095 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=46cbd6bc-e9a5-4ea4-8071-b7907b0d9553 path="/var/lib/kubelet/pods/46cbd6bc-e9a5-4ea4-8071-b7907b0d9553/volumes"
	
	* 
	* ==> storage-provisioner [bd00c7c8d56df98b54c2ea7279f4cc4eadbb4b5123194f630f2170230b1bacea] <==
	* I1207 20:56:41.232035       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-867544 -n test-preload-867544
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-867544 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-867544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-867544
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-867544: (1.12174283s)
--- FAIL: TestPreload (253.78s)

                                                
                                    
TestRunningBinaryUpgrade (133.21s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.728425072.exe start -p running-upgrade-738567 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.728425072.exe start -p running-upgrade-738567 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m4.332602889s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-738567 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-738567 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (4.721552428s)

                                                
                                                
-- stdout --
	* [running-upgrade-738567] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-738567 in cluster running-upgrade-738567
	* Updating the running kvm2 "running-upgrade-738567" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 21:04:53.776432   44239 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:04:53.776592   44239 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:04:53.776602   44239 out.go:309] Setting ErrFile to fd 2...
	I1207 21:04:53.776609   44239 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:04:53.776912   44239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:04:53.777622   44239 out.go:303] Setting JSON to false
	I1207 21:04:53.778960   44239 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6440,"bootTime":1701976654,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:04:53.779057   44239 start.go:138] virtualization: kvm guest
	I1207 21:04:53.782413   44239 out.go:177] * [running-upgrade-738567] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:04:53.783766   44239 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:04:53.785129   44239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:04:53.783865   44239 notify.go:220] Checking for updates...
	I1207 21:04:53.787801   44239 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:04:53.789257   44239 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:04:53.790604   44239 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:04:53.792144   44239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:04:53.794020   44239 config.go:182] Loaded profile config "running-upgrade-738567": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1207 21:04:53.794041   44239 start_flags.go:694] config upgrade: Driver=kvm2
	I1207 21:04:53.794054   44239 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c
	I1207 21:04:53.794145   44239 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/running-upgrade-738567/config.json ...
	I1207 21:04:53.794874   44239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:04:53.794946   44239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:04:53.813514   44239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I1207 21:04:53.814284   44239 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:04:53.815000   44239 main.go:141] libmachine: Using API Version  1
	I1207 21:04:53.815027   44239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:04:53.815423   44239 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:04:53.815610   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .DriverName
	I1207 21:04:53.817740   44239 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1207 21:04:53.819001   44239 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:04:53.819318   44239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:04:53.819365   44239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:04:53.838118   44239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I1207 21:04:53.838730   44239 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:04:53.839345   44239 main.go:141] libmachine: Using API Version  1
	I1207 21:04:53.839369   44239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:04:53.839714   44239 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:04:53.839887   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .DriverName
	I1207 21:04:53.880912   44239 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 21:04:53.882590   44239 start.go:298] selected driver: kvm2
	I1207 21:04:53.882606   44239 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-738567 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.112 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1207 21:04:53.882721   44239 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:04:53.883429   44239 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:04:53.883535   44239 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:04:53.902052   44239 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:04:53.902556   44239 cni.go:84] Creating CNI manager for ""
	I1207 21:04:53.902580   44239 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1207 21:04:53.902593   44239 start_flags.go:323] config:
	{Name:running-upgrade-738567 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.112 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1207 21:04:53.902813   44239 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:04:53.904959   44239 out.go:177] * Starting control plane node running-upgrade-738567 in cluster running-upgrade-738567
	I1207 21:04:53.906580   44239 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1207 21:04:54.355732   44239 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1207 21:04:54.355899   44239 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/running-upgrade-738567/config.json ...
	I1207 21:04:54.356069   44239 cache.go:107] acquiring lock: {Name:mke7b9cce1dd6177935767b47cf17b792acd813b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:04:54.356111   44239 cache.go:107] acquiring lock: {Name:mkc4d7e5d37b595f2f268f9bc76ee57c57733bd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:04:54.356093   44239 cache.go:107] acquiring lock: {Name:mk2c9559721cced69fa80399ec867ba938d31132 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:04:54.356177   44239 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 21:04:54.356161   44239 cache.go:107] acquiring lock: {Name:mkd02fea2e1e57a33cef497d6368498caaf2d77c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:04:54.356190   44239 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 134.035µs
	I1207 21:04:54.356203   44239 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 21:04:54.356202   44239 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1207 21:04:54.356225   44239 cache.go:107] acquiring lock: {Name:mk9eeda193bf8f6484a5ddbe20468bc38bf1698e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:04:54.356262   44239 start.go:365] acquiring machines lock for running-upgrade-738567: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:04:54.356292   44239 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1207 21:04:54.356314   44239 start.go:369] acquired machines lock for "running-upgrade-738567" in 31.642µs
	I1207 21:04:54.356319   44239 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1207 21:04:54.356328   44239 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:04:54.356338   44239 fix.go:54] fixHost starting: minikube
	I1207 21:04:54.356383   44239 cache.go:107] acquiring lock: {Name:mkbb6112829f3c995ddc5e7a205e86acb30d7a41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:04:54.356426   44239 cache.go:107] acquiring lock: {Name:mk899a6c60b4e6dd820eab979e68d5ebeab8a79c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:04:54.356071   44239 cache.go:107] acquiring lock: {Name:mk5922d7049fbe72309551b789159ea643488bd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:04:54.356482   44239 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1207 21:04:54.356499   44239 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1207 21:04:54.356518   44239 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1207 21:04:54.356605   44239 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1207 21:04:54.356721   44239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:04:54.356760   44239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:04:54.358283   44239 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1207 21:04:54.358304   44239 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1207 21:04:54.358347   44239 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1207 21:04:54.358388   44239 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1207 21:04:54.358540   44239 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1207 21:04:54.358682   44239 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1207 21:04:54.358704   44239 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1207 21:04:54.377838   44239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I1207 21:04:54.379711   44239 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:04:54.380939   44239 main.go:141] libmachine: Using API Version  1
	I1207 21:04:54.380974   44239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:04:54.381320   44239 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:04:54.381519   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .DriverName
	I1207 21:04:54.381669   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetState
	I1207 21:04:54.383665   44239 fix.go:102] recreateIfNeeded on running-upgrade-738567: state=Running err=<nil>
	W1207 21:04:54.383694   44239 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:04:54.386097   44239 out.go:177] * Updating the running kvm2 "running-upgrade-738567" VM ...
	I1207 21:04:54.387648   44239 machine.go:88] provisioning docker machine ...
	I1207 21:04:54.387675   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .DriverName
	I1207 21:04:54.387927   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetMachineName
	I1207 21:04:54.388141   44239 buildroot.go:166] provisioning hostname "running-upgrade-738567"
	I1207 21:04:54.388170   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetMachineName
	I1207 21:04:54.388360   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHHostname
	I1207 21:04:54.392922   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:54.392959   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:54.392980   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:54.393087   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHPort
	I1207 21:04:54.394062   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:54.394593   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:54.394775   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHUsername
	I1207 21:04:54.394972   44239 main.go:141] libmachine: Using SSH client type: native
	I1207 21:04:54.395456   44239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.83.112 22 <nil> <nil>}
	I1207 21:04:54.395473   44239 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-738567 && echo "running-upgrade-738567" | sudo tee /etc/hostname
	I1207 21:04:54.546171   44239 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-738567
	
	I1207 21:04:54.546206   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHHostname
	I1207 21:04:54.549217   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:54.549682   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:54.549728   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:54.550009   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHPort
	I1207 21:04:54.550244   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:54.550433   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:54.550589   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHUsername
	I1207 21:04:54.550753   44239 main.go:141] libmachine: Using SSH client type: native
	I1207 21:04:54.551112   44239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.83.112 22 <nil> <nil>}
	I1207 21:04:54.551139   44239 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-738567' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-738567/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-738567' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:04:54.556923   44239 cache.go:162] opening:  /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1207 21:04:54.563995   44239 cache.go:162] opening:  /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1207 21:04:54.580808   44239 cache.go:162] opening:  /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1207 21:04:54.646282   44239 cache.go:162] opening:  /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1207 21:04:54.679368   44239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:04:54.679396   44239 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:04:54.679456   44239 buildroot.go:174] setting up certificates
	I1207 21:04:54.679469   44239 provision.go:83] configureAuth start
	I1207 21:04:54.679483   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetMachineName
	I1207 21:04:54.679707   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetIP
	I1207 21:04:54.685189   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHHostname
	I1207 21:04:54.685703   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:54.685736   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:54.685758   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:54.688595   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:54.689041   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:54.689073   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:54.689326   44239 provision.go:138] copyHostCerts
	I1207 21:04:54.689492   44239 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:04:54.689510   44239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:04:54.689663   44239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:04:54.690630   44239 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:04:54.690645   44239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:04:54.690679   44239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:04:54.690810   44239 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:04:54.690823   44239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:04:54.690853   44239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:04:54.690931   44239 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-738567 san=[192.168.83.112 192.168.83.112 localhost 127.0.0.1 minikube running-upgrade-738567]
	I1207 21:04:54.692729   44239 cache.go:162] opening:  /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1207 21:04:54.700166   44239 cache.go:157] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1207 21:04:54.700192   44239 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 344.114171ms
	I1207 21:04:54.700205   44239 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1207 21:04:54.734543   44239 cache.go:162] opening:  /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1207 21:04:54.736632   44239 cache.go:162] opening:  /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1207 21:04:55.015412   44239 provision.go:172] copyRemoteCerts
	I1207 21:04:55.015493   44239 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:04:55.015521   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHHostname
	I1207 21:04:55.018994   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:55.019335   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:55.019382   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:55.019605   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHPort
	I1207 21:04:55.019991   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:55.020201   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHUsername
	I1207 21:04:55.020431   44239 sshutil.go:53] new ssh client: &{IP:192.168.83.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/running-upgrade-738567/id_rsa Username:docker}
	I1207 21:04:55.113827   44239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1207 21:04:55.146077   44239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:04:55.165539   44239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:04:55.193564   44239 provision.go:86] duration metric: configureAuth took 514.065788ms
	I1207 21:04:55.193599   44239 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:04:55.194060   44239 config.go:182] Loaded profile config "running-upgrade-738567": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1207 21:04:55.194250   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHHostname
	I1207 21:04:55.197689   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:55.198020   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:55.198051   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:55.199037   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHPort
	I1207 21:04:55.199289   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:55.199468   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:55.199654   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHUsername
	I1207 21:04:55.199828   44239 main.go:141] libmachine: Using SSH client type: native
	I1207 21:04:55.200127   44239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.83.112 22 <nil> <nil>}
	I1207 21:04:55.200143   44239 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:04:55.363192   44239 cache.go:157] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1207 21:04:55.363227   44239 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.00718635s
	I1207 21:04:55.363244   44239 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1207 21:04:55.400533   44239 cache.go:157] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1207 21:04:55.400586   44239 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 1.044160391s
	I1207 21:04:55.400704   44239 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1207 21:04:55.613453   44239 cache.go:157] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1207 21:04:55.613496   44239 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.257115923s
	I1207 21:04:55.613550   44239 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1207 21:04:55.631609   44239 cache.go:157] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1207 21:04:55.631638   44239 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.275527457s
	I1207 21:04:55.631652   44239 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1207 21:04:55.843745   44239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:04:55.843774   44239 machine.go:91] provisioned docker machine in 1.456108329s
	I1207 21:04:55.843786   44239 start.go:300] post-start starting for "running-upgrade-738567" (driver="kvm2")
	I1207 21:04:55.843798   44239 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:04:55.843819   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .DriverName
	I1207 21:04:55.844195   44239 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:04:55.844229   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHHostname
	I1207 21:04:55.847541   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:55.848005   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:55.848037   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:55.848316   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHPort
	I1207 21:04:55.848538   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:55.848680   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHUsername
	I1207 21:04:55.848844   44239 sshutil.go:53] new ssh client: &{IP:192.168.83.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/running-upgrade-738567/id_rsa Username:docker}
	I1207 21:04:55.949106   44239 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:04:55.969167   44239 info.go:137] Remote host: Buildroot 2019.02.7
	I1207 21:04:55.969191   44239 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:04:55.969244   44239 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:04:55.969336   44239 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:04:55.969514   44239 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:04:55.980794   44239 cache.go:157] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1207 21:04:55.980834   44239 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.624673426s
	I1207 21:04:55.980849   44239 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1207 21:04:55.983913   44239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:04:56.025026   44239 start.go:303] post-start completed in 181.225145ms
	I1207 21:04:56.025111   44239 fix.go:56] fixHost completed within 1.66877512s
	I1207 21:04:56.025144   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHHostname
	I1207 21:04:56.032305   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:56.032826   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHPort
	I1207 21:04:56.032882   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:56.032918   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:56.037178   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:56.037517   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:56.038009   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHUsername
	I1207 21:04:56.038224   44239 main.go:141] libmachine: Using SSH client type: native
	I1207 21:04:56.038638   44239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.83.112 22 <nil> <nil>}
	I1207 21:04:56.038662   44239 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 21:04:56.180184   44239 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983096.175879051
	
	I1207 21:04:56.180214   44239 fix.go:206] guest clock: 1701983096.175879051
	I1207 21:04:56.180222   44239 fix.go:219] Guest: 2023-12-07 21:04:56.175879051 +0000 UTC Remote: 2023-12-07 21:04:56.02512734 +0000 UTC m=+2.305814056 (delta=150.751711ms)
	I1207 21:04:56.180243   44239 fix.go:190] guest clock delta is within tolerance: 150.751711ms
	I1207 21:04:56.180250   44239 start.go:83] releasing machines lock for "running-upgrade-738567", held for 1.823928294s
	I1207 21:04:56.180273   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .DriverName
	I1207 21:04:56.180546   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetIP
	I1207 21:04:56.183835   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:56.184282   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:56.184314   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:56.184603   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .DriverName
	I1207 21:04:56.185175   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .DriverName
	I1207 21:04:56.185428   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .DriverName
	I1207 21:04:56.185516   44239 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:04:56.185554   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHHostname
	I1207 21:04:56.185801   44239 ssh_runner.go:195] Run: cat /version.json
	I1207 21:04:56.185829   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHHostname
	I1207 21:04:56.189738   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:56.190122   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:56.190144   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:56.191390   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:56.191416   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:3b:fd", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:03:20 +0000 UTC Type:0 Mac:52:54:00:0f:3b:fd Iaid: IPaddr:192.168.83.112 Prefix:24 Hostname:running-upgrade-738567 Clientid:01:52:54:00:0f:3b:fd}
	I1207 21:04:56.191440   44239 main.go:141] libmachine: (running-upgrade-738567) DBG | domain running-upgrade-738567 has defined IP address 192.168.83.112 and MAC address 52:54:00:0f:3b:fd in network minikube-net
	I1207 21:04:56.191519   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHPort
	I1207 21:04:56.191728   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:56.191842   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHUsername
	I1207 21:04:56.191882   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHPort
	I1207 21:04:56.192043   44239 sshutil.go:53] new ssh client: &{IP:192.168.83.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/running-upgrade-738567/id_rsa Username:docker}
	I1207 21:04:56.192577   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHKeyPath
	I1207 21:04:56.192793   44239 main.go:141] libmachine: (running-upgrade-738567) Calling .GetSSHUsername
	I1207 21:04:56.192931   44239 sshutil.go:53] new ssh client: &{IP:192.168.83.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/running-upgrade-738567/id_rsa Username:docker}
	W1207 21:04:56.312800   44239 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1207 21:04:56.576630   44239 cache.go:157] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1207 21:04:56.576661   44239 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.220442124s
	I1207 21:04:56.576675   44239 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1207 21:04:56.576695   44239 cache.go:87] Successfully saved all images to host disk.
	I1207 21:04:56.576751   44239 ssh_runner.go:195] Run: systemctl --version
	I1207 21:04:56.584032   44239 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:04:56.691375   44239 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:04:56.698247   44239 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:04:56.698363   44239 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:04:56.704706   44239 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 21:04:56.704731   44239 start.go:475] detecting cgroup driver to use...
	I1207 21:04:56.704795   44239 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:04:56.717026   44239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:04:56.729046   44239 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:04:56.729113   44239 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:04:56.739287   44239 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:04:56.748635   44239 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1207 21:04:56.762467   44239 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1207 21:04:56.762593   44239 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:04:56.939326   44239 docker.go:219] disabling docker service ...
	I1207 21:04:56.939407   44239 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:04:57.971430   44239 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.031996523s)
	I1207 21:04:57.971498   44239 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:04:57.999301   44239 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:04:58.219036   44239 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:04:58.379166   44239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:04:58.391677   44239 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:04:58.415951   44239 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1207 21:04:58.416018   44239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:04:58.426837   44239 out.go:177] 
	W1207 21:04:58.428319   44239 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1207 21:04:58.428341   44239 out.go:239] * 
	* 
	W1207 21:04:58.429441   44239 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 21:04:58.430729   44239 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-738567 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-07 21:04:58.449442628 +0000 UTC m=+3822.617587595
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-738567 -n running-upgrade-738567
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-738567 -n running-upgrade-738567: exit status 4 (976.831963ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:04:59.388577   44506 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-738567" does not appear in /home/jenkins/minikube-integration/17719-9628/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-738567" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-738567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-738567
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-738567: (1.265946537s)
--- FAIL: TestRunningBinaryUpgrade (133.21s)
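Note on the failure above: the new binary exits with RUNTIME_ENABLE because the pause_image update runs sed against /etc/crio/crio.conf.d/02-crio.conf, and that drop-in file does not exist on the guest provisioned by the old minikube v1.6.2 ISO (the stderr above shows "No such file or directory"). The snippet below is only an illustrative sketch for reproducing that step by hand inside the guest (for example via `minikube ssh -p running-upgrade-738567`); the sed command is copied verbatim from the log, while the existence check and the echo message are assumptions added for illustration and are not part of minikube.

# Sketch only: re-run the pause_image update that failed above, guarding for
# the missing cri-o drop-in file. Run inside the upgraded guest.
CONF=/etc/crio/crio.conf.d/02-crio.conf   # path taken from the error message above
if [ -f "$CONF" ]; then
  # command copied from the log line at 21:04:58.416018
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"
else
  # on the v1.6.x buildroot image this branch is taken, matching the
  # "No such file or directory" stderr and the exit status 1 above
  echo "$CONF is missing on this guest"
fi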

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (324.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.4137214122.exe start -p stopped-upgrade-099448 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1207 21:01:41.700297   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.4137214122.exe start -p stopped-upgrade-099448 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m16.71430823s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.4137214122.exe -p stopped-upgrade-099448 stop
E1207 21:04:28.942044   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.4137214122.exe -p stopped-upgrade-099448 stop: (1m32.763655738s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-099448 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1207 21:06:05.939416   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-099448 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m34.638245942s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-099448] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-099448 in cluster stopped-upgrade-099448
	* Restarting existing kvm2 VM for "stopped-upgrade-099448" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 21:05:27.572036   47139 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:05:27.572375   47139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:05:27.572389   47139 out.go:309] Setting ErrFile to fd 2...
	I1207 21:05:27.572397   47139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:05:27.572668   47139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:05:27.573316   47139 out.go:303] Setting JSON to false
	I1207 21:05:27.574465   47139 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6474,"bootTime":1701976654,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:05:27.574534   47139 start.go:138] virtualization: kvm guest
	I1207 21:05:27.577169   47139 out.go:177] * [stopped-upgrade-099448] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:05:27.578860   47139 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:05:27.578867   47139 notify.go:220] Checking for updates...
	I1207 21:05:27.580375   47139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:05:27.581970   47139 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:05:27.583493   47139 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:05:27.585036   47139 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:05:27.586440   47139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:05:27.588088   47139 config.go:182] Loaded profile config "stopped-upgrade-099448": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1207 21:05:27.588102   47139 start_flags.go:694] config upgrade: Driver=kvm2
	I1207 21:05:27.588112   47139 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c
	I1207 21:05:27.588173   47139 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/stopped-upgrade-099448/config.json ...
	I1207 21:05:27.588703   47139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:05:27.588777   47139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:05:27.606311   47139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33359
	I1207 21:05:27.606810   47139 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:05:27.607569   47139 main.go:141] libmachine: Using API Version  1
	I1207 21:05:27.607607   47139 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:05:27.608010   47139 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:05:27.608213   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .DriverName
	I1207 21:05:27.610531   47139 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1207 21:05:27.611858   47139 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:05:27.612329   47139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:05:27.612376   47139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:05:27.632529   47139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36081
	I1207 21:05:27.633011   47139 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:05:27.633702   47139 main.go:141] libmachine: Using API Version  1
	I1207 21:05:27.633723   47139 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:05:27.634100   47139 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:05:27.634311   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .DriverName
	I1207 21:05:27.678504   47139 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 21:05:27.679818   47139 start.go:298] selected driver: kvm2
	I1207 21:05:27.679839   47139 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-099448 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.121 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1207 21:05:27.679978   47139 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:05:27.680984   47139 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:05:27.681074   47139 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:05:27.699149   47139 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:05:27.699617   47139 cni.go:84] Creating CNI manager for ""
	I1207 21:05:27.699639   47139 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1207 21:05:27.699651   47139 start_flags.go:323] config:
	{Name:stopped-upgrade-099448 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.121 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1207 21:05:27.699875   47139 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:05:27.702632   47139 out.go:177] * Starting control plane node stopped-upgrade-099448 in cluster stopped-upgrade-099448
	I1207 21:05:27.704008   47139 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1207 21:05:28.146276   47139 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1207 21:05:28.146622   47139 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/stopped-upgrade-099448/config.json ...
	I1207 21:05:28.147131   47139 cache.go:107] acquiring lock: {Name:mke7b9cce1dd6177935767b47cf17b792acd813b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:05:28.147233   47139 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 21:05:28.147251   47139 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 150.869µs
	I1207 21:05:28.147267   47139 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 21:05:28.147292   47139 cache.go:107] acquiring lock: {Name:mkc4d7e5d37b595f2f268f9bc76ee57c57733bd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:05:28.147354   47139 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1207 21:05:28.147367   47139 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 79.663µs
	I1207 21:05:28.147379   47139 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1207 21:05:28.147398   47139 cache.go:107] acquiring lock: {Name:mkbb6112829f3c995ddc5e7a205e86acb30d7a41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:05:28.147442   47139 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1207 21:05:28.147455   47139 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 63.451µs
	I1207 21:05:28.147471   47139 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1207 21:05:28.147485   47139 cache.go:107] acquiring lock: {Name:mk5922d7049fbe72309551b789159ea643488bd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:05:28.147529   47139 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1207 21:05:28.147544   47139 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 60.701µs
	I1207 21:05:28.147558   47139 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1207 21:05:28.147586   47139 cache.go:107] acquiring lock: {Name:mkd02fea2e1e57a33cef497d6368498caaf2d77c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:05:28.147635   47139 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1207 21:05:28.147652   47139 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 78.577µs
	I1207 21:05:28.147660   47139 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1207 21:05:28.147674   47139 cache.go:107] acquiring lock: {Name:mk2c9559721cced69fa80399ec867ba938d31132 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:05:28.147725   47139 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1207 21:05:28.147738   47139 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 66µs
	I1207 21:05:28.147750   47139 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1207 21:05:28.147763   47139 cache.go:107] acquiring lock: {Name:mk9eeda193bf8f6484a5ddbe20468bc38bf1698e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:05:28.147809   47139 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1207 21:05:28.147821   47139 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 60.083µs
	I1207 21:05:28.147830   47139 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1207 21:05:28.147845   47139 cache.go:107] acquiring lock: {Name:mk899a6c60b4e6dd820eab979e68d5ebeab8a79c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:05:28.147908   47139 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1207 21:05:28.147926   47139 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 82.189µs
	I1207 21:05:28.147940   47139 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1207 21:05:28.147948   47139 cache.go:87] Successfully saved all images to host disk.
	I1207 21:05:28.179101   47139 start.go:365] acquiring machines lock for stopped-upgrade-099448: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:06:17.490902   47139 start.go:369] acquired machines lock for "stopped-upgrade-099448" in 49.311726229s
	I1207 21:06:17.490960   47139 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:06:17.490971   47139 fix.go:54] fixHost starting: minikube
	I1207 21:06:17.491412   47139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:06:17.491464   47139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:06:17.508370   47139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41099
	I1207 21:06:17.508761   47139 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:06:17.509250   47139 main.go:141] libmachine: Using API Version  1
	I1207 21:06:17.509272   47139 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:06:17.509634   47139 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:06:17.509853   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .DriverName
	I1207 21:06:17.510046   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetState
	I1207 21:06:17.511766   47139 fix.go:102] recreateIfNeeded on stopped-upgrade-099448: state=Stopped err=<nil>
	I1207 21:06:17.511804   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .DriverName
	W1207 21:06:17.511969   47139 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:06:17.514353   47139 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-099448" ...
	I1207 21:06:17.516049   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .Start
	I1207 21:06:17.516252   47139 main.go:141] libmachine: (stopped-upgrade-099448) Ensuring networks are active...
	I1207 21:06:17.516976   47139 main.go:141] libmachine: (stopped-upgrade-099448) Ensuring network default is active
	I1207 21:06:17.517397   47139 main.go:141] libmachine: (stopped-upgrade-099448) Ensuring network minikube-net is active
	I1207 21:06:17.517794   47139 main.go:141] libmachine: (stopped-upgrade-099448) Getting domain xml...
	I1207 21:06:17.518547   47139 main.go:141] libmachine: (stopped-upgrade-099448) Creating domain...
	I1207 21:06:18.872438   47139 main.go:141] libmachine: (stopped-upgrade-099448) Waiting to get IP...
	I1207 21:06:18.873433   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:18.873969   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:18.874062   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:18.873939   47727 retry.go:31] will retry after 203.047383ms: waiting for machine to come up
	I1207 21:06:19.078336   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:19.078865   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:19.078889   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:19.078818   47727 retry.go:31] will retry after 276.995851ms: waiting for machine to come up
	I1207 21:06:19.357065   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:19.357612   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:19.357646   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:19.357574   47727 retry.go:31] will retry after 463.92602ms: waiting for machine to come up
	I1207 21:06:19.823223   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:19.823799   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:19.823830   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:19.823756   47727 retry.go:31] will retry after 401.187749ms: waiting for machine to come up
	I1207 21:06:20.226528   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:20.227174   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:20.227203   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:20.227077   47727 retry.go:31] will retry after 751.233911ms: waiting for machine to come up
	I1207 21:06:20.980258   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:20.980770   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:20.980795   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:20.980721   47727 retry.go:31] will retry after 740.719118ms: waiting for machine to come up
	I1207 21:06:21.723221   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:21.723735   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:21.723770   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:21.723661   47727 retry.go:31] will retry after 1.11494893s: waiting for machine to come up
	I1207 21:06:22.840209   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:22.840686   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:22.840716   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:22.840619   47727 retry.go:31] will retry after 1.196822615s: waiting for machine to come up
	I1207 21:06:24.039082   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:24.039535   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:24.039565   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:24.039482   47727 retry.go:31] will retry after 1.170253027s: waiting for machine to come up
	I1207 21:06:25.210935   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:25.211470   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:25.211507   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:25.211426   47727 retry.go:31] will retry after 1.824383212s: waiting for machine to come up
	I1207 21:06:27.038696   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:27.039174   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:27.039209   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:27.039125   47727 retry.go:31] will retry after 2.188863525s: waiting for machine to come up
	I1207 21:06:29.229934   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:29.230396   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:29.230422   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:29.230352   47727 retry.go:31] will retry after 2.637469064s: waiting for machine to come up
	I1207 21:06:31.871190   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:31.871710   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:31.871738   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:31.871671   47727 retry.go:31] will retry after 2.743911863s: waiting for machine to come up
	I1207 21:06:34.617376   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:34.617853   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:34.617885   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:34.617798   47727 retry.go:31] will retry after 4.247743529s: waiting for machine to come up
	I1207 21:06:38.867117   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:38.867577   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:38.867599   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:38.867533   47727 retry.go:31] will retry after 5.0540097s: waiting for machine to come up
	I1207 21:06:43.923282   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:43.923870   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:43.923903   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:43.923823   47727 retry.go:31] will retry after 5.419417224s: waiting for machine to come up
	I1207 21:06:49.344552   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:49.345114   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | unable to find current IP address of domain stopped-upgrade-099448 in network minikube-net
	I1207 21:06:49.345142   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | I1207 21:06:49.345068   47727 retry.go:31] will retry after 10.379232353s: waiting for machine to come up
	I1207 21:06:59.727072   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:59.727530   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has current primary IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:59.727567   47139 main.go:141] libmachine: (stopped-upgrade-099448) Found IP for machine: 192.168.83.121
	I1207 21:06:59.727580   47139 main.go:141] libmachine: (stopped-upgrade-099448) Reserving static IP address...
	I1207 21:06:59.727984   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "stopped-upgrade-099448", mac: "52:54:00:6e:f8:1a", ip: "192.168.83.121"} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:06:59.728019   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-099448", mac: "52:54:00:6e:f8:1a", ip: "192.168.83.121"}
	I1207 21:06:59.728031   47139 main.go:141] libmachine: (stopped-upgrade-099448) Reserved static IP address: 192.168.83.121
	I1207 21:06:59.728041   47139 main.go:141] libmachine: (stopped-upgrade-099448) Waiting for SSH to be available...
	I1207 21:06:59.728051   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | Getting to WaitForSSH function...
	I1207 21:06:59.730231   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:59.730590   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:06:59.730634   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:59.730725   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | Using SSH client type: external
	I1207 21:06:59.730759   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/stopped-upgrade-099448/id_rsa (-rw-------)
	I1207 21:06:59.730797   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/stopped-upgrade-099448/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:06:59.730819   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | About to run SSH command:
	I1207 21:06:59.730837   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | exit 0
	I1207 21:06:59.862449   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | SSH cmd err, output: <nil>: 
	I1207 21:06:59.862861   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetConfigRaw
	I1207 21:06:59.863464   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetIP
	I1207 21:06:59.865809   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:59.866161   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:06:59.866192   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:59.866419   47139 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/stopped-upgrade-099448/config.json ...
	I1207 21:06:59.866629   47139 machine.go:88] provisioning docker machine ...
	I1207 21:06:59.866647   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .DriverName
	I1207 21:06:59.866835   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetMachineName
	I1207 21:06:59.867064   47139 buildroot.go:166] provisioning hostname "stopped-upgrade-099448"
	I1207 21:06:59.867090   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetMachineName
	I1207 21:06:59.867271   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHHostname
	I1207 21:06:59.869437   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:59.869780   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:06:59.869808   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:59.869875   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHPort
	I1207 21:06:59.870041   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:06:59.870182   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:06:59.870290   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHUsername
	I1207 21:06:59.870485   47139 main.go:141] libmachine: Using SSH client type: native
	I1207 21:06:59.870839   47139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.83.121 22 <nil> <nil>}
	I1207 21:06:59.870860   47139 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-099448 && echo "stopped-upgrade-099448" | sudo tee /etc/hostname
	I1207 21:06:59.993322   47139 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-099448
	
	I1207 21:06:59.993356   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHHostname
	I1207 21:06:59.995747   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:59.996145   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:06:59.996181   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:06:59.996320   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHPort
	I1207 21:06:59.996495   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:06:59.996634   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:06:59.996750   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHUsername
	I1207 21:06:59.996914   47139 main.go:141] libmachine: Using SSH client type: native
	I1207 21:06:59.997224   47139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.83.121 22 <nil> <nil>}
	I1207 21:06:59.997241   47139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-099448' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-099448/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-099448' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:07:00.114826   47139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:07:00.114853   47139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:07:00.114869   47139 buildroot.go:174] setting up certificates
	I1207 21:07:00.114879   47139 provision.go:83] configureAuth start
	I1207 21:07:00.114887   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetMachineName
	I1207 21:07:00.115186   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetIP
	I1207 21:07:00.117963   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:00.118344   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:07:00.118374   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:00.118569   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHHostname
	I1207 21:07:00.120726   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:00.121085   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:07:00.121114   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:00.121212   47139 provision.go:138] copyHostCerts
	I1207 21:07:00.121272   47139 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:07:00.121287   47139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:07:00.121360   47139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:07:00.121477   47139 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:07:00.121487   47139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:07:00.121515   47139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:07:00.121570   47139 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:07:00.121576   47139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:07:00.121595   47139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:07:00.121645   47139 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-099448 san=[192.168.83.121 192.168.83.121 localhost 127.0.0.1 minikube stopped-upgrade-099448]
	I1207 21:07:00.288945   47139 provision.go:172] copyRemoteCerts
	I1207 21:07:00.289007   47139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:07:00.289030   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHHostname
	I1207 21:07:00.291661   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:00.292046   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:07:00.292076   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:00.292255   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHPort
	I1207 21:07:00.292460   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:07:00.292636   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHUsername
	I1207 21:07:00.292792   47139 sshutil.go:53] new ssh client: &{IP:192.168.83.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/stopped-upgrade-099448/id_rsa Username:docker}
	I1207 21:07:00.377287   47139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:07:00.391483   47139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1207 21:07:00.405606   47139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:07:00.419232   47139 provision.go:86] duration metric: configureAuth took 304.331329ms
	I1207 21:07:00.419266   47139 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:07:00.419452   47139 config.go:182] Loaded profile config "stopped-upgrade-099448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1207 21:07:00.419536   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHHostname
	I1207 21:07:00.422463   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:00.422801   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:07:00.422836   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:00.422978   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHPort
	I1207 21:07:00.423177   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:07:00.423368   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:07:00.423525   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHUsername
	I1207 21:07:00.423723   47139 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:00.424090   47139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.83.121 22 <nil> <nil>}
	I1207 21:07:00.424115   47139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:07:01.370433   47139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:07:01.370462   47139 machine.go:91] provisioned docker machine in 1.503820472s
	I1207 21:07:01.370499   47139 start.go:300] post-start starting for "stopped-upgrade-099448" (driver="kvm2")
	I1207 21:07:01.370518   47139 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:07:01.370544   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .DriverName
	I1207 21:07:01.370859   47139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:07:01.370894   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHHostname
	I1207 21:07:01.373944   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:01.374326   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:07:01.374355   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:01.374532   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHPort
	I1207 21:07:01.374718   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:07:01.374889   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHUsername
	I1207 21:07:01.375024   47139 sshutil.go:53] new ssh client: &{IP:192.168.83.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/stopped-upgrade-099448/id_rsa Username:docker}
	I1207 21:07:01.456191   47139 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:07:01.460172   47139 info.go:137] Remote host: Buildroot 2019.02.7
	I1207 21:07:01.460203   47139 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:07:01.460269   47139 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:07:01.460337   47139 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:07:01.460417   47139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:07:01.465568   47139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:07:01.478644   47139 start.go:303] post-start completed in 108.127686ms
	I1207 21:07:01.478664   47139 fix.go:56] fixHost completed within 43.987694015s
	I1207 21:07:01.478684   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHHostname
	I1207 21:07:01.481472   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:01.481829   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:07:01.481855   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:01.482077   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHPort
	I1207 21:07:01.482304   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:07:01.482445   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:07:01.482575   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHUsername
	I1207 21:07:01.482697   47139 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:01.483059   47139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.83.121 22 <nil> <nil>}
	I1207 21:07:01.483072   47139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 21:07:01.598307   47139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983221.564794982
	
	I1207 21:07:01.598329   47139 fix.go:206] guest clock: 1701983221.564794982
	I1207 21:07:01.598336   47139 fix.go:219] Guest: 2023-12-07 21:07:01.564794982 +0000 UTC Remote: 2023-12-07 21:07:01.478667813 +0000 UTC m=+93.966585568 (delta=86.127169ms)
	I1207 21:07:01.598357   47139 fix.go:190] guest clock delta is within tolerance: 86.127169ms
	I1207 21:07:01.598363   47139 start.go:83] releasing machines lock for "stopped-upgrade-099448", held for 44.107426496s
	I1207 21:07:01.598397   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .DriverName
	I1207 21:07:01.598646   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetIP
	I1207 21:07:01.601422   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:01.601860   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:07:01.601901   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:01.602079   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .DriverName
	I1207 21:07:01.602685   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .DriverName
	I1207 21:07:01.602874   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .DriverName
	I1207 21:07:01.602965   47139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:07:01.603002   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHHostname
	I1207 21:07:01.603087   47139 ssh_runner.go:195] Run: cat /version.json
	I1207 21:07:01.603112   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHHostname
	I1207 21:07:01.605820   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:01.606087   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:01.606239   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:07:01.606263   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:01.606396   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHPort
	I1207 21:07:01.606540   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:f8:1a", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-12-07 22:06:44 +0000 UTC Type:0 Mac:52:54:00:6e:f8:1a Iaid: IPaddr:192.168.83.121 Prefix:24 Hostname:stopped-upgrade-099448 Clientid:01:52:54:00:6e:f8:1a}
	I1207 21:07:01.606552   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:07:01.606572   47139 main.go:141] libmachine: (stopped-upgrade-099448) DBG | domain stopped-upgrade-099448 has defined IP address 192.168.83.121 and MAC address 52:54:00:6e:f8:1a in network minikube-net
	I1207 21:07:01.606695   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHUsername
	I1207 21:07:01.606710   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHPort
	I1207 21:07:01.606860   47139 sshutil.go:53] new ssh client: &{IP:192.168.83.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/stopped-upgrade-099448/id_rsa Username:docker}
	I1207 21:07:01.606960   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHKeyPath
	I1207 21:07:01.607115   47139 main.go:141] libmachine: (stopped-upgrade-099448) Calling .GetSSHUsername
	I1207 21:07:01.607260   47139 sshutil.go:53] new ssh client: &{IP:192.168.83.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/stopped-upgrade-099448/id_rsa Username:docker}
	W1207 21:07:01.722336   47139 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1207 21:07:01.722418   47139 ssh_runner.go:195] Run: systemctl --version
	I1207 21:07:01.727312   47139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:07:01.775815   47139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:07:01.781832   47139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:07:01.781919   47139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:07:01.788557   47139 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 21:07:01.788578   47139 start.go:475] detecting cgroup driver to use...
	I1207 21:07:01.788640   47139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:07:01.798579   47139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:07:01.807346   47139 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:07:01.807411   47139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:07:01.815365   47139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:07:01.823044   47139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1207 21:07:01.830908   47139 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1207 21:07:01.830978   47139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:07:01.914931   47139 docker.go:219] disabling docker service ...
	I1207 21:07:01.915001   47139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:07:01.926831   47139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:07:01.934717   47139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:07:02.017765   47139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:07:02.111123   47139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:07:02.119257   47139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:07:02.129970   47139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1207 21:07:02.130037   47139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:02.138281   47139 out.go:177] 
	W1207 21:07:02.139749   47139 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1207 21:07:02.139772   47139 out.go:239] * 
	* 
	W1207 21:07:02.140667   47139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 21:07:02.141812   47139 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-099448 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (324.12s)
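The failing step in the log above is the pause_image rewrite: minikube runs sed against /etc/crio/crio.conf.d/02-crio.conf, but that drop-in is absent on this guest (it reports Buildroot 2019.02.7, provisioned by the old v1.6.2-era ISO), so the sed exits non-zero and the start aborts with RUNTIME_ENABLE. A minimal shell sketch of that edit, guarded by an existence check, follows; the config path and pause image value are taken verbatim from the log, while the guard itself is only an illustration and is not minikube's actual logic.

    # Sketch only: the path and image value come from the log above; the guard is hypothetical.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    if [ -f "$CONF" ]; then
      sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"
    else
      echo "crio drop-in $CONF not found on this guest image; skipping pause_image update" >&2
    fi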

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (106.78s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-763966 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1207 21:06:41.700425   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-763966 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m42.118511963s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-763966] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-763966 in cluster pause-763966
	* Updating the running kvm2 "pause-763966" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-763966" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 21:06:41.742269   47885 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:06:41.742514   47885 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:06:41.742527   47885 out.go:309] Setting ErrFile to fd 2...
	I1207 21:06:41.742535   47885 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:06:41.742859   47885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:06:41.743612   47885 out.go:303] Setting JSON to false
	I1207 21:06:41.744902   47885 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6548,"bootTime":1701976654,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:06:41.744982   47885 start.go:138] virtualization: kvm guest
	I1207 21:06:41.747608   47885 out.go:177] * [pause-763966] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:06:41.749619   47885 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:06:41.750983   47885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:06:41.749653   47885 notify.go:220] Checking for updates...
	I1207 21:06:41.752521   47885 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:06:41.754181   47885 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:06:41.755507   47885 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:06:41.756838   47885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:06:41.758734   47885 config.go:182] Loaded profile config "pause-763966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:06:41.759399   47885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:06:41.759449   47885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:06:41.779736   47885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I1207 21:06:41.780248   47885 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:06:41.780919   47885 main.go:141] libmachine: Using API Version  1
	I1207 21:06:41.780949   47885 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:06:41.781384   47885 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:06:41.781686   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:06:41.782053   47885 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:06:41.782375   47885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:06:41.782422   47885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:06:41.797972   47885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I1207 21:06:41.798507   47885 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:06:41.799093   47885 main.go:141] libmachine: Using API Version  1
	I1207 21:06:41.799119   47885 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:06:41.799551   47885 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:06:41.799735   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:06:41.844787   47885 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 21:06:41.846247   47885 start.go:298] selected driver: kvm2
	I1207 21:06:41.846262   47885 start.go:902] validating driver "kvm2" against &{Name:pause-763966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:pause-763966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:06:41.846440   47885 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:06:41.846744   47885 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:06:41.846819   47885 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:06:41.862653   47885 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:06:41.863508   47885 cni.go:84] Creating CNI manager for ""
	I1207 21:06:41.863525   47885 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:06:41.863547   47885 start_flags.go:323] config:
	{Name:pause-763966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-763966 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:06:41.863776   47885 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:06:41.865612   47885 out.go:177] * Starting control plane node pause-763966 in cluster pause-763966
	I1207 21:06:41.867062   47885 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:06:41.867106   47885 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 21:06:41.867114   47885 cache.go:56] Caching tarball of preloaded images
	I1207 21:06:41.867199   47885 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:06:41.867213   47885 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 21:06:41.867326   47885 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/config.json ...
	I1207 21:06:41.867513   47885 start.go:365] acquiring machines lock for pause-763966: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:07:30.167457   47885 start.go:369] acquired machines lock for "pause-763966" in 48.299920182s
	I1207 21:07:30.167512   47885 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:07:30.167523   47885 fix.go:54] fixHost starting: 
	I1207 21:07:30.167890   47885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:07:30.167939   47885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:07:30.184020   47885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1207 21:07:30.184435   47885 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:07:30.184906   47885 main.go:141] libmachine: Using API Version  1
	I1207 21:07:30.184935   47885 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:07:30.185309   47885 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:07:30.185514   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:30.185686   47885 main.go:141] libmachine: (pause-763966) Calling .GetState
	I1207 21:07:30.187354   47885 fix.go:102] recreateIfNeeded on pause-763966: state=Running err=<nil>
	W1207 21:07:30.187390   47885 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:07:30.189755   47885 out.go:177] * Updating the running kvm2 "pause-763966" VM ...
	I1207 21:07:30.191196   47885 machine.go:88] provisioning docker machine ...
	I1207 21:07:30.191218   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:30.191434   47885 main.go:141] libmachine: (pause-763966) Calling .GetMachineName
	I1207 21:07:30.191591   47885 buildroot.go:166] provisioning hostname "pause-763966"
	I1207 21:07:30.191615   47885 main.go:141] libmachine: (pause-763966) Calling .GetMachineName
	I1207 21:07:30.191775   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.194611   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.195060   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.195087   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.195229   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.195414   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.195577   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.195700   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.195847   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.196172   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:30.196186   47885 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-763966 && echo "pause-763966" | sudo tee /etc/hostname
	I1207 21:07:30.339851   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-763966
	
	I1207 21:07:30.339883   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.342876   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.343334   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.343366   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.343576   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.343772   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.343982   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.344187   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.344380   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.344864   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:30.344891   47885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-763966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-763966/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-763966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:07:30.463538   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:07:30.463567   47885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:07:30.463609   47885 buildroot.go:174] setting up certificates
	I1207 21:07:30.463619   47885 provision.go:83] configureAuth start
	I1207 21:07:30.463632   47885 main.go:141] libmachine: (pause-763966) Calling .GetMachineName
	I1207 21:07:30.463881   47885 main.go:141] libmachine: (pause-763966) Calling .GetIP
	I1207 21:07:30.466509   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.466835   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.466865   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.467040   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.469115   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.469452   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.469481   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.469647   47885 provision.go:138] copyHostCerts
	I1207 21:07:30.469711   47885 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:07:30.469721   47885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:07:30.469771   47885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:07:30.469843   47885 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:07:30.469851   47885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:07:30.469874   47885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:07:30.469930   47885 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:07:30.469944   47885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:07:30.469968   47885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:07:30.470050   47885 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.pause-763966 san=[192.168.39.237 192.168.39.237 localhost 127.0.0.1 minikube pause-763966]
	I1207 21:07:30.624834   47885 provision.go:172] copyRemoteCerts
	I1207 21:07:30.624904   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:07:30.624932   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.627807   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.628175   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.628216   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.628466   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.628663   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.628852   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.629015   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:30.721413   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:07:30.750553   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1207 21:07:30.776230   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:07:30.808007   47885 provision.go:86] duration metric: configureAuth took 344.374986ms
	I1207 21:07:30.808031   47885 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:07:30.808223   47885 config.go:182] Loaded profile config "pause-763966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:07:30.808312   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.811071   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.811380   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.811415   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.811554   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.811747   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.811950   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.812083   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.812250   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.812583   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:30.812600   47885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:07:37.404458   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:07:37.404490   47885 machine.go:91] provisioned docker machine in 7.213270016s
	I1207 21:07:37.404503   47885 start.go:300] post-start starting for "pause-763966" (driver="kvm2")
	I1207 21:07:37.404515   47885 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:07:37.404540   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.404909   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:07:37.404940   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.407902   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.408334   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.408368   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.408509   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.408711   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.408837   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.408970   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:37.521762   47885 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:07:37.526220   47885 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:07:37.526247   47885 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:07:37.526308   47885 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:07:37.526416   47885 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:07:37.526541   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:07:37.539457   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:07:37.564052   47885 start.go:303] post-start completed in 159.537127ms
	I1207 21:07:37.564083   47885 fix.go:56] fixHost completed within 7.39656043s
	I1207 21:07:37.564102   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.567031   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.567432   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.567462   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.567631   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.567849   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.568032   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.568206   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.568384   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:37.568686   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:37.568707   47885 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 21:07:37.686682   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983257.682984288
	
	I1207 21:07:37.686707   47885 fix.go:206] guest clock: 1701983257.682984288
	I1207 21:07:37.686716   47885 fix.go:219] Guest: 2023-12-07 21:07:37.682984288 +0000 UTC Remote: 2023-12-07 21:07:37.564087197 +0000 UTC m=+55.882358893 (delta=118.897091ms)
	I1207 21:07:37.686771   47885 fix.go:190] guest clock delta is within tolerance: 118.897091ms
	I1207 21:07:37.686780   47885 start.go:83] releasing machines lock for "pause-763966", held for 7.51930022s
	I1207 21:07:37.686812   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.687086   47885 main.go:141] libmachine: (pause-763966) Calling .GetIP
	I1207 21:07:37.689968   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.690410   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.690448   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.690593   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.691097   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.691281   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.691389   47885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:07:37.691429   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.691532   47885 ssh_runner.go:195] Run: cat /version.json
	I1207 21:07:37.691558   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.694652   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.694973   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.695096   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.695128   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.695319   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.695451   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.695488   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.695541   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.695756   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.695922   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.695930   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:37.696478   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.696672   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.696848   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:37.824802   47885 ssh_runner.go:195] Run: systemctl --version
	I1207 21:07:37.833573   47885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:07:37.992042   47885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:07:37.998690   47885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:07:37.998764   47885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:07:38.008789   47885 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 21:07:38.008817   47885 start.go:475] detecting cgroup driver to use...
	I1207 21:07:38.008903   47885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:07:38.029726   47885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:07:38.045392   47885 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:07:38.045453   47885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:07:38.061788   47885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:07:38.077501   47885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:07:38.230276   47885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:07:38.929441   47885 docker.go:219] disabling docker service ...
	I1207 21:07:38.929533   47885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:07:38.972952   47885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:07:39.000065   47885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:07:39.365500   47885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:07:39.657590   47885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:07:39.734261   47885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:07:39.833606   47885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:07:39.833681   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.870335   47885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:07:39.870417   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.901831   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.928228   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.952330   47885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:07:39.972481   47885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:07:39.987141   47885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:07:40.003730   47885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:07:40.274754   47885 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:07:41.974747   47885 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.699950937s)
	I1207 21:07:41.974779   47885 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:07:41.974832   47885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:07:41.981723   47885 start.go:543] Will wait 60s for crictl version
	I1207 21:07:41.981786   47885 ssh_runner.go:195] Run: which crictl
	I1207 21:07:41.987013   47885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:07:42.050779   47885 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:07:42.050904   47885 ssh_runner.go:195] Run: crio --version
	I1207 21:07:42.110899   47885 ssh_runner.go:195] Run: crio --version
	I1207 21:07:42.164304   47885 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:07:42.165952   47885 main.go:141] libmachine: (pause-763966) Calling .GetIP
	I1207 21:07:42.169388   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:42.169815   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:42.169842   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:42.170126   47885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 21:07:42.175657   47885 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:07:42.175717   47885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:07:42.234910   47885 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:07:42.234943   47885 crio.go:415] Images already preloaded, skipping extraction
	I1207 21:07:42.235020   47885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:07:42.278372   47885 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:07:42.278396   47885 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:07:42.278517   47885 ssh_runner.go:195] Run: crio config
	I1207 21:07:42.444519   47885 cni.go:84] Creating CNI manager for ""
	I1207 21:07:42.444554   47885 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:07:42.444586   47885 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:07:42.444620   47885 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-763966 NodeName:pause-763966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:07:42.444881   47885 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-763966"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
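One way to sanity-check a generated multi-document config like the one above, before it is handed to kubeadm on the node, is a dry run. This is a hypothetical check, not something the test performs; the path matches the kubeadm.yaml.new file scp'd further down, and the `kubeadm config validate` subcommand is only present in newer kubeadm releases.

    # Validate the rendered config without touching the cluster.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

    # Or exercise the full init code path without applying anything.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run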
	
	I1207 21:07:42.445014   47885 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-763966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-763966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:07:42.445086   47885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:07:42.467395   47885 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:07:42.467487   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:07:42.511628   47885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1207 21:07:42.544396   47885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:07:42.591065   47885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1207 21:07:42.789598   47885 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I1207 21:07:42.825431   47885 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966 for IP: 192.168.39.237
	I1207 21:07:42.825474   47885 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:07:42.825656   47885 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:07:42.825713   47885 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:07:42.825819   47885 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/client.key
	I1207 21:07:42.825914   47885 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/apiserver.key.cf509944
	I1207 21:07:42.825992   47885 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/proxy-client.key
	I1207 21:07:42.826146   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:07:42.826189   47885 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:07:42.826207   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:07:42.826244   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:07:42.826287   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:07:42.826320   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:07:42.826383   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:07:42.827247   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:07:42.902388   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 21:07:42.970133   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:07:43.015938   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 21:07:43.058300   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:07:43.137443   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:07:43.189174   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:07:43.240886   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:07:43.296335   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:07:43.350271   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:07:43.412610   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:07:43.475454   47885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:07:43.526819   47885 ssh_runner.go:195] Run: openssl version
	I1207 21:07:43.546515   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:07:43.561720   47885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:07:43.570116   47885 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:07:43.570205   47885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:07:43.577494   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:07:43.587448   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:07:43.598484   47885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:07:43.604317   47885 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:07:43.604420   47885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:07:43.611072   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:07:43.621498   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:07:43.636404   47885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:07:43.645084   47885 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:07:43.645165   47885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:07:43.657188   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
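The `ln -fs ... /etc/ssl/certs/<hash>.0` steps above install each CA under its OpenSSL subject-hash name so the system trust store can resolve it. A minimal sketch of the same idea for a single PEM file (the filename is illustrative):

    # Install a CA certificate under its subject-hash name, as the log does.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"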
	I1207 21:07:43.672912   47885 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:07:43.681666   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:07:43.694094   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:07:43.705788   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:07:43.719218   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:07:43.732112   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:07:43.744493   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
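Each of the six `-checkend 86400` probes above exits non-zero if the certificate expires within 24 hours. They could equally be written as one loop over the control-plane certs; a sketch, assuming the same /var/lib/minikube/certs layout:

    # Fail if any control-plane client/server cert expires within 24h (86400s).
    for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt \
               etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
               front-proxy-client.crt; do
        sudo openssl x509 -noout -checkend 86400 \
            -in "/var/lib/minikube/certs/$crt" || echo "expiring soon: $crt"
    done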
	I1207 21:07:43.764423   47885 kubeadm.go:404] StartCluster: {Name:pause-763966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.4 ClusterName:pause-763966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gp
u-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:07:43.764573   47885 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:07:43.764656   47885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:07:43.852222   47885 cri.go:89] found id: "a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022"
	I1207 21:07:43.852249   47885 cri.go:89] found id: "d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d"
	I1207 21:07:43.852259   47885 cri.go:89] found id: "085182fb95992bc23ed02f0be641f942c2f7195cdbc192e5d86f5c2e89beff27"
	I1207 21:07:43.852265   47885 cri.go:89] found id: "37d089b9fc205ebc244d160915340f06e87b5e3b59b75f3b316fb5e333bc21a6"
	I1207 21:07:43.852270   47885 cri.go:89] found id: "3eb4483e3db6fd79059095509f2360ce563cf446b08f2091f8add3d6aa59bd6b"
	I1207 21:07:43.852276   47885 cri.go:89] found id: "531a6b1cf0597b055a9600ccccdc9633c3470679ae44e383bdf594a3f7bb16b7"
	I1207 21:07:43.852282   47885 cri.go:89] found id: ""
	I1207 21:07:43.852335   47885 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-763966 -n pause-763966
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-763966 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-763966 logs -n 25: (1.607616577s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | systemctl cat cri-docker                              |                        |         |         |                     |                     |
	|         | --no-pager                                            |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo cat                             | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf  |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo cat                             | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service            |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | cri-dockerd --version                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | systemctl status containerd                           |                        |         |         |                     |                     |
	|         | --all --full --no-pager                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | systemctl cat containerd                              |                        |         |         |                     |                     |
	|         | --no-pager                                            |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo cat                             | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | /lib/systemd/system/containerd.service                |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo cat                             | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | /etc/containerd/config.toml                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | containerd config dump                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | systemctl status crio --all                           |                        |         |         |                     |                     |
	|         | --full --no-pager                                     |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | systemctl cat crio --no-pager                         |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo find                            | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                         |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                  |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo crio                            | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | config                                                |                        |         |         |                     |                     |
	| delete  | -p cilium-715748                                      | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC | 07 Dec 23 21:05 UTC |
	| start   | -p old-k8s-version-483745                             | old-k8s-version-483745 | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC | 07 Dec 23 21:08 UTC |
	|         | --memory=2200                                         |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                        |         |         |                     |                     |
	|         | --kvm-network=default                                 |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                        |         |         |                     |                     |
	|         | --keep-context=false                                  |                        |         |         |                     |                     |
	|         | --driver=kvm2                                         |                        |         |         |                     |                     |
	|         | --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                          |                        |         |         |                     |                     |
	| start   | -p stopped-upgrade-099448                             | stopped-upgrade-099448 | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | --memory=2200                                         |                        |         |         |                     |                     |
	|         | --alsologtostderr                                     |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                        |         |         |                     |                     |
	|         | --container-runtime=crio                              |                        |         |         |                     |                     |
	| ssh     | cert-options-620116 ssh                               | cert-options-620116    | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	|         | openssl x509 -text -noout -in                         |                        |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                        |         |         |                     |                     |
	| ssh     | -p cert-options-620116 -- sudo                        | cert-options-620116    | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                        |         |         |                     |                     |
	| delete  | -p cert-options-620116                                | cert-options-620116    | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	| start   | -p no-preload-950431                                  | no-preload-950431      | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC |                     |
	|         | --memory=2200 --alsologtostderr                       |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                           |                        |         |         |                     |                     |
	|         | --driver=kvm2                                         |                        |         |         |                     |                     |
	|         | --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                     |                        |         |         |                     |                     |
	| start   | -p pause-763966                                       | pause-763966           | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:08 UTC |
	|         | --alsologtostderr                                     |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                        |         |         |                     |                     |
	|         | --container-runtime=crio                              |                        |         |         |                     |                     |
	| delete  | -p stopped-upgrade-099448                             | stopped-upgrade-099448 | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:07 UTC |
	| start   | -p embed-certs-598346                                 | embed-certs-598346     | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC |                     |
	|         | --memory=2200                                         |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                          |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-483745       | old-k8s-version-483745 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-483745                             | old-k8s-version-483745 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC |                     |
	|         | --alsologtostderr -v=3                                |                        |         |         |                     |                     |
	|---------|-------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 21:07:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 21:07:04.157683   48213 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:07:04.158063   48213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:07:04.158075   48213 out.go:309] Setting ErrFile to fd 2...
	I1207 21:07:04.158082   48213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:07:04.158349   48213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:07:04.159166   48213 out.go:303] Setting JSON to false
	I1207 21:07:04.160409   48213 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6570,"bootTime":1701976654,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:07:04.160491   48213 start.go:138] virtualization: kvm guest
	I1207 21:07:04.163051   48213 out.go:177] * [embed-certs-598346] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:07:04.164721   48213 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:07:04.164735   48213 notify.go:220] Checking for updates...
	I1207 21:07:04.166228   48213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:07:04.167848   48213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:07:04.169308   48213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:07:04.170875   48213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:07:04.172340   48213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:07:04.174340   48213 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:07:04.174453   48213 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:07:04.174587   48213 config.go:182] Loaded profile config "pause-763966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:07:04.174677   48213 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:07:04.213520   48213 out.go:177] * Using the kvm2 driver based on user configuration
	I1207 21:07:04.214791   48213 start.go:298] selected driver: kvm2
	I1207 21:07:04.214805   48213 start.go:902] validating driver "kvm2" against <nil>
	I1207 21:07:04.214816   48213 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:07:04.215568   48213 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:07:04.215652   48213 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:07:04.231808   48213 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:07:04.231847   48213 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 21:07:04.232086   48213 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 21:07:04.232148   48213 cni.go:84] Creating CNI manager for ""
	I1207 21:07:04.232165   48213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:07:04.232185   48213 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 21:07:04.232195   48213 start_flags.go:323] config:
	{Name:embed-certs-598346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:07:04.232404   48213 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:07:04.234355   48213 out.go:177] * Starting control plane node embed-certs-598346 in cluster embed-certs-598346
	I1207 21:07:01.601645   47677 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 21:07:01.601802   47677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:07:01.601849   47677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:07:01.618878   47677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I1207 21:07:01.619289   47677 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:07:01.619811   47677 main.go:141] libmachine: Using API Version  1
	I1207 21:07:01.619850   47677 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:07:01.620192   47677 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:07:01.620415   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:07:01.620584   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:01.620761   47677 start.go:159] libmachine.API.Create for "no-preload-950431" (driver="kvm2")
	I1207 21:07:01.620789   47677 client.go:168] LocalClient.Create starting
	I1207 21:07:01.620820   47677 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 21:07:01.620858   47677 main.go:141] libmachine: Decoding PEM data...
	I1207 21:07:01.620887   47677 main.go:141] libmachine: Parsing certificate...
	I1207 21:07:01.620955   47677 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 21:07:01.620985   47677 main.go:141] libmachine: Decoding PEM data...
	I1207 21:07:01.621010   47677 main.go:141] libmachine: Parsing certificate...
	I1207 21:07:01.621041   47677 main.go:141] libmachine: Running pre-create checks...
	I1207 21:07:01.621055   47677 main.go:141] libmachine: (no-preload-950431) Calling .PreCreateCheck
	I1207 21:07:01.621368   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:07:01.621772   47677 main.go:141] libmachine: Creating machine...
	I1207 21:07:01.621785   47677 main.go:141] libmachine: (no-preload-950431) Calling .Create
	I1207 21:07:01.621909   47677 main.go:141] libmachine: (no-preload-950431) Creating KVM machine...
	I1207 21:07:01.623049   47677 main.go:141] libmachine: (no-preload-950431) DBG | found existing default KVM network
	I1207 21:07:01.624314   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:01.624141   47999 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e1:ad:58} reservation:<nil>}
	I1207 21:07:01.625488   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:01.625366   47999 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ac720}
	I1207 21:07:01.631356   47677 main.go:141] libmachine: (no-preload-950431) DBG | trying to create private KVM network mk-no-preload-950431 192.168.50.0/24...
	I1207 21:07:01.705010   47677 main.go:141] libmachine: (no-preload-950431) DBG | private KVM network mk-no-preload-950431 192.168.50.0/24 created
	I1207 21:07:01.705057   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:01.704972   47999 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:07:01.705077   47677 main.go:141] libmachine: (no-preload-950431) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431 ...
	I1207 21:07:01.705094   47677 main.go:141] libmachine: (no-preload-950431) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 21:07:01.705123   47677 main.go:141] libmachine: (no-preload-950431) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 21:07:01.917863   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:01.917745   47999 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa...
	I1207 21:07:02.023714   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:02.023601   47999 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/no-preload-950431.rawdisk...
	I1207 21:07:02.023756   47677 main.go:141] libmachine: (no-preload-950431) DBG | Writing magic tar header
	I1207 21:07:02.023779   47677 main.go:141] libmachine: (no-preload-950431) DBG | Writing SSH key tar header
	I1207 21:07:02.023794   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:02.023746   47999 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431 ...
	I1207 21:07:02.023914   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431
	I1207 21:07:02.023956   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 21:07:02.023972   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431 (perms=drwx------)
	I1207 21:07:02.023993   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 21:07:02.024009   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 21:07:02.024032   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 21:07:02.024054   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 21:07:02.024069   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:07:02.024089   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 21:07:02.024105   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 21:07:02.024121   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins
	I1207 21:07:02.024138   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home
	I1207 21:07:02.024156   47677 main.go:141] libmachine: (no-preload-950431) DBG | Skipping /home - not owner
	I1207 21:07:02.024170   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 21:07:02.024185   47677 main.go:141] libmachine: (no-preload-950431) Creating domain...
	I1207 21:07:02.025314   47677 main.go:141] libmachine: (no-preload-950431) define libvirt domain using xml: 
	I1207 21:07:02.025341   47677 main.go:141] libmachine: (no-preload-950431) <domain type='kvm'>
	I1207 21:07:02.025383   47677 main.go:141] libmachine: (no-preload-950431)   <name>no-preload-950431</name>
	I1207 21:07:02.025423   47677 main.go:141] libmachine: (no-preload-950431)   <memory unit='MiB'>2200</memory>
	I1207 21:07:02.025437   47677 main.go:141] libmachine: (no-preload-950431)   <vcpu>2</vcpu>
	I1207 21:07:02.025448   47677 main.go:141] libmachine: (no-preload-950431)   <features>
	I1207 21:07:02.025459   47677 main.go:141] libmachine: (no-preload-950431)     <acpi/>
	I1207 21:07:02.025471   47677 main.go:141] libmachine: (no-preload-950431)     <apic/>
	I1207 21:07:02.025480   47677 main.go:141] libmachine: (no-preload-950431)     <pae/>
	I1207 21:07:02.025491   47677 main.go:141] libmachine: (no-preload-950431)     
	I1207 21:07:02.025543   47677 main.go:141] libmachine: (no-preload-950431)   </features>
	I1207 21:07:02.025561   47677 main.go:141] libmachine: (no-preload-950431)   <cpu mode='host-passthrough'>
	I1207 21:07:02.025568   47677 main.go:141] libmachine: (no-preload-950431)   
	I1207 21:07:02.025576   47677 main.go:141] libmachine: (no-preload-950431)   </cpu>
	I1207 21:07:02.025592   47677 main.go:141] libmachine: (no-preload-950431)   <os>
	I1207 21:07:02.025601   47677 main.go:141] libmachine: (no-preload-950431)     <type>hvm</type>
	I1207 21:07:02.025608   47677 main.go:141] libmachine: (no-preload-950431)     <boot dev='cdrom'/>
	I1207 21:07:02.025616   47677 main.go:141] libmachine: (no-preload-950431)     <boot dev='hd'/>
	I1207 21:07:02.025627   47677 main.go:141] libmachine: (no-preload-950431)     <bootmenu enable='no'/>
	I1207 21:07:02.025648   47677 main.go:141] libmachine: (no-preload-950431)   </os>
	I1207 21:07:02.025662   47677 main.go:141] libmachine: (no-preload-950431)   <devices>
	I1207 21:07:02.025672   47677 main.go:141] libmachine: (no-preload-950431)     <disk type='file' device='cdrom'>
	I1207 21:07:02.025689   47677 main.go:141] libmachine: (no-preload-950431)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/boot2docker.iso'/>
	I1207 21:07:02.025702   47677 main.go:141] libmachine: (no-preload-950431)       <target dev='hdc' bus='scsi'/>
	I1207 21:07:02.025714   47677 main.go:141] libmachine: (no-preload-950431)       <readonly/>
	I1207 21:07:02.025729   47677 main.go:141] libmachine: (no-preload-950431)     </disk>
	I1207 21:07:02.025744   47677 main.go:141] libmachine: (no-preload-950431)     <disk type='file' device='disk'>
	I1207 21:07:02.025757   47677 main.go:141] libmachine: (no-preload-950431)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 21:07:02.025795   47677 main.go:141] libmachine: (no-preload-950431)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/no-preload-950431.rawdisk'/>
	I1207 21:07:02.025812   47677 main.go:141] libmachine: (no-preload-950431)       <target dev='hda' bus='virtio'/>
	I1207 21:07:02.025822   47677 main.go:141] libmachine: (no-preload-950431)     </disk>
	I1207 21:07:02.025829   47677 main.go:141] libmachine: (no-preload-950431)     <interface type='network'>
	I1207 21:07:02.025842   47677 main.go:141] libmachine: (no-preload-950431)       <source network='mk-no-preload-950431'/>
	I1207 21:07:02.025855   47677 main.go:141] libmachine: (no-preload-950431)       <model type='virtio'/>
	I1207 21:07:02.025877   47677 main.go:141] libmachine: (no-preload-950431)     </interface>
	I1207 21:07:02.025896   47677 main.go:141] libmachine: (no-preload-950431)     <interface type='network'>
	I1207 21:07:02.025906   47677 main.go:141] libmachine: (no-preload-950431)       <source network='default'/>
	I1207 21:07:02.025911   47677 main.go:141] libmachine: (no-preload-950431)       <model type='virtio'/>
	I1207 21:07:02.025933   47677 main.go:141] libmachine: (no-preload-950431)     </interface>
	I1207 21:07:02.025950   47677 main.go:141] libmachine: (no-preload-950431)     <serial type='pty'>
	I1207 21:07:02.025972   47677 main.go:141] libmachine: (no-preload-950431)       <target port='0'/>
	I1207 21:07:02.025991   47677 main.go:141] libmachine: (no-preload-950431)     </serial>
	I1207 21:07:02.026004   47677 main.go:141] libmachine: (no-preload-950431)     <console type='pty'>
	I1207 21:07:02.026017   47677 main.go:141] libmachine: (no-preload-950431)       <target type='serial' port='0'/>
	I1207 21:07:02.026034   47677 main.go:141] libmachine: (no-preload-950431)     </console>
	I1207 21:07:02.026050   47677 main.go:141] libmachine: (no-preload-950431)     <rng model='virtio'>
	I1207 21:07:02.026065   47677 main.go:141] libmachine: (no-preload-950431)       <backend model='random'>/dev/random</backend>
	I1207 21:07:02.026077   47677 main.go:141] libmachine: (no-preload-950431)     </rng>
	I1207 21:07:02.026089   47677 main.go:141] libmachine: (no-preload-950431)     
	I1207 21:07:02.026106   47677 main.go:141] libmachine: (no-preload-950431)     
	I1207 21:07:02.026119   47677 main.go:141] libmachine: (no-preload-950431)   </devices>
	I1207 21:07:02.026134   47677 main.go:141] libmachine: (no-preload-950431) </domain>
	I1207 21:07:02.026149   47677 main.go:141] libmachine: (no-preload-950431) 
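The domain XML that libmachine emits above is ordinary libvirt input; outside of minikube the same lifecycle could be driven with virsh. A hedged sketch (the XML file path is hypothetical, and domifaddr output depends on how the network hands out leases):

    # Define and boot a KVM guest from a domain XML like the one printed above.
    virsh --connect qemu:///system define /tmp/no-preload-950431.xml
    virsh --connect qemu:///system start no-preload-950431

    # Poll for the address the guest picked up, as the "Waiting to get IP" loop does.
    virsh --connect qemu:///system domifaddr no-preload-950431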
	I1207 21:07:02.030791   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:c3:dc:38 in network default
	I1207 21:07:02.031366   47677 main.go:141] libmachine: (no-preload-950431) Ensuring networks are active...
	I1207 21:07:02.031400   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:02.032131   47677 main.go:141] libmachine: (no-preload-950431) Ensuring network default is active
	I1207 21:07:02.032431   47677 main.go:141] libmachine: (no-preload-950431) Ensuring network mk-no-preload-950431 is active
	I1207 21:07:02.033040   47677 main.go:141] libmachine: (no-preload-950431) Getting domain xml...
	I1207 21:07:02.033819   47677 main.go:141] libmachine: (no-preload-950431) Creating domain...
	I1207 21:07:03.695080   47677 main.go:141] libmachine: (no-preload-950431) Waiting to get IP...
	I1207 21:07:03.696019   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:03.696557   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:03.696586   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:03.696531   47999 retry.go:31] will retry after 310.733444ms: waiting for machine to come up
	I1207 21:07:04.008957   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:04.009459   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:04.009490   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:04.009432   47999 retry.go:31] will retry after 321.879279ms: waiting for machine to come up
	I1207 21:07:04.334271   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:04.334755   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:04.334784   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:04.334717   47999 retry.go:31] will retry after 378.524792ms: waiting for machine to come up
	I1207 21:07:04.715210   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:04.715782   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:04.715810   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:04.715719   47999 retry.go:31] will retry after 389.607351ms: waiting for machine to come up
	I1207 21:07:04.192647   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:06.691664   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:08.692066   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:04.235630   48213 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:07:04.235662   48213 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 21:07:04.235668   48213 cache.go:56] Caching tarball of preloaded images
	I1207 21:07:04.235750   48213 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:07:04.235763   48213 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 21:07:04.235850   48213 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/config.json ...
	I1207 21:07:04.235888   48213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/config.json: {Name:mk6253fc7de4a52e34595793c259307458a0de3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:07:04.236050   48213 start.go:365] acquiring machines lock for embed-certs-598346: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:07:05.107351   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:05.107866   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:05.107896   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:05.107830   47999 retry.go:31] will retry after 680.922555ms: waiting for machine to come up
	I1207 21:07:05.790667   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:05.791196   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:05.791256   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:05.791131   47999 retry.go:31] will retry after 773.589238ms: waiting for machine to come up
	I1207 21:07:06.565801   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:06.566216   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:06.566245   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:06.566161   47999 retry.go:31] will retry after 1.172647624s: waiting for machine to come up
	I1207 21:07:07.740835   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:07.741251   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:07.741274   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:07.741213   47999 retry.go:31] will retry after 1.281716702s: waiting for machine to come up
	I1207 21:07:09.024381   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:09.024894   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:09.024920   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:09.024848   47999 retry.go:31] will retry after 1.3476333s: waiting for machine to come up
	I1207 21:07:10.693386   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:13.193600   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:10.374187   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:10.374745   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:10.374776   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:10.374692   47999 retry.go:31] will retry after 1.507121871s: waiting for machine to come up
	I1207 21:07:11.883107   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:11.883625   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:11.883656   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:11.883567   47999 retry.go:31] will retry after 1.85350099s: waiting for machine to come up
	I1207 21:07:13.739119   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:13.739620   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:13.739655   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:13.739572   47999 retry.go:31] will retry after 3.34155315s: waiting for machine to come up
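The interleaved "will retry after ..." lines above come from the driver polling libvirt for the new domain's DHCP lease with a steadily growing delay. A minimal sketch of that pattern (not minikube's actual retry.go; checkIP is a hypothetical stand-in for the lease lookup):

    // waitForIP polls a "machine is up" check with growing, jittered delays,
    // mirroring the "will retry after ...: waiting for machine to come up" lines.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func waitForIP(checkIP func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := checkIP(); err == nil && ip != "" {
    			return ip, nil
    		}
    		// Grow the delay and add jitter so repeated probes do not hammer libvirt.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 5*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no DHCP lease yet") // simulates "unable to find current IP address"
    		}
    		return "192.168.50.100", nil
    	}, 2*time.Minute)
    	fmt.Println(ip, err)
    }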
	I1207 21:07:15.692450   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:17.692705   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:17.082837   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:17.083289   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:17.083320   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:17.083237   47999 retry.go:31] will retry after 3.305771578s: waiting for machine to come up
	I1207 21:07:20.192134   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:22.192823   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:20.392762   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:20.393285   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:20.393307   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:20.393261   47999 retry.go:31] will retry after 5.192401612s: waiting for machine to come up
	I1207 21:07:24.691247   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:27.191865   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:25.586975   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:25.587454   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has current primary IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:25.587475   47677 main.go:141] libmachine: (no-preload-950431) Found IP for machine: 192.168.50.100
	I1207 21:07:25.587488   47677 main.go:141] libmachine: (no-preload-950431) Reserving static IP address...
	I1207 21:07:25.587769   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"} in network mk-no-preload-950431
	I1207 21:07:25.668963   47677 main.go:141] libmachine: (no-preload-950431) Reserved static IP address: 192.168.50.100
	I1207 21:07:25.668993   47677 main.go:141] libmachine: (no-preload-950431) Waiting for SSH to be available...
	I1207 21:07:25.669017   47677 main.go:141] libmachine: (no-preload-950431) DBG | Getting to WaitForSSH function...
	I1207 21:07:25.671914   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:25.672296   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431
	I1207 21:07:25.672327   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find defined IP address of network mk-no-preload-950431 interface with MAC address 52:54:00:80:97:8f
	I1207 21:07:25.672450   47677 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH client type: external
	I1207 21:07:25.672483   47677 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa (-rw-------)
	I1207 21:07:25.672526   47677 main.go:141] libmachine: (no-preload-950431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:07:25.672541   47677 main.go:141] libmachine: (no-preload-950431) DBG | About to run SSH command:
	I1207 21:07:25.672558   47677 main.go:141] libmachine: (no-preload-950431) DBG | exit 0
	I1207 21:07:25.676079   47677 main.go:141] libmachine: (no-preload-950431) DBG | SSH cmd err, output: exit status 255: 
	I1207 21:07:25.676107   47677 main.go:141] libmachine: (no-preload-950431) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1207 21:07:25.676119   47677 main.go:141] libmachine: (no-preload-950431) DBG | command : exit 0
	I1207 21:07:25.676138   47677 main.go:141] libmachine: (no-preload-950431) DBG | err     : exit status 255
	I1207 21:07:25.676185   47677 main.go:141] libmachine: (no-preload-950431) DBG | output  : 
	I1207 21:07:28.676733   47677 main.go:141] libmachine: (no-preload-950431) DBG | Getting to WaitForSSH function...
	I1207 21:07:28.679340   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.679648   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:28.679677   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.679754   47677 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH client type: external
	I1207 21:07:28.679787   47677 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa (-rw-------)
	I1207 21:07:28.679835   47677 main.go:141] libmachine: (no-preload-950431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:07:28.679846   47677 main.go:141] libmachine: (no-preload-950431) DBG | About to run SSH command:
	I1207 21:07:28.679859   47677 main.go:141] libmachine: (no-preload-950431) DBG | exit 0
	I1207 21:07:28.765563   47677 main.go:141] libmachine: (no-preload-950431) DBG | SSH cmd err, output: <nil>: 
	I1207 21:07:28.765828   47677 main.go:141] libmachine: (no-preload-950431) KVM machine creation complete!
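WaitForSSH above simply runs "exit 0" through an external ssh client until the guest answers: the first attempt fails with exit status 255, and the retry succeeds once the DHCP lease is in place. A rough sketch of that probe loop, assuming a user/host like the ones in the log (the key path here is a placeholder, not a value from this run):

    // sshReady returns true once "exit 0" can be executed on the guest over SSH.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func sshReady(user, host, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		fmt.Sprintf("%s@%s", user, host),
    		"exit 0")
    	// A non-nil error covers both exit status 255 and connection failures.
    	return cmd.Run() == nil
    }

    func main() {
    	for i := 0; i < 20; i++ {
    		if sshReady("docker", "192.168.50.100", "/path/to/id_rsa") {
    			fmt.Println("SSH is available")
    			return
    		}
    		fmt.Println("SSH not ready yet, retrying...")
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }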
	I1207 21:07:28.766119   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:07:28.766612   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:28.766771   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:28.766928   47677 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1207 21:07:28.766946   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:07:28.768119   47677 main.go:141] libmachine: Detecting operating system of created instance...
	I1207 21:07:28.768132   47677 main.go:141] libmachine: Waiting for SSH to be available...
	I1207 21:07:28.768138   47677 main.go:141] libmachine: Getting to WaitForSSH function...
	I1207 21:07:28.768152   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:28.770474   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.770784   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:28.770817   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.770969   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:28.771149   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:28.771326   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:28.771489   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:28.771649   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:28.772101   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:28.772117   47677 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1207 21:07:28.881201   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:07:28.881229   47677 main.go:141] libmachine: Detecting the provisioner...
	I1207 21:07:28.881240   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:28.884077   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.884437   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:28.884467   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.884693   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:28.884876   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:28.885051   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:28.885202   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:28.885392   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:28.885846   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:28.885865   47677 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1207 21:07:29.002722   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2b7375-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1207 21:07:29.002806   47677 main.go:141] libmachine: found compatible host: buildroot
	I1207 21:07:29.002828   47677 main.go:141] libmachine: Provisioning with buildroot...
	I1207 21:07:29.002839   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:07:29.003122   47677 buildroot.go:166] provisioning hostname "no-preload-950431"
	I1207 21:07:29.003161   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:07:29.003405   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.006420   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.007128   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.007173   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.007469   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:29.007832   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.008145   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.008487   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:29.008803   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:29.009550   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:29.009591   47677 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-950431 && echo "no-preload-950431" | sudo tee /etc/hostname
	I1207 21:07:29.130183   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-950431
	
	I1207 21:07:29.130213   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.132925   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.133251   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.133284   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.133436   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:29.133606   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.133761   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.133872   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:29.134060   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:29.134453   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:29.134473   47677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950431/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:07:29.257609   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
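The SSH snippet above makes the hostname-to-127.0.1.1 mapping idempotent: do nothing if some /etc/hosts line already ends with the hostname, otherwise rewrite an existing 127.0.1.1 entry or append one. The same logic restated as a small, self-contained Go function (illustrative only, not minikube code):

    // ensureHostMapping applies the /etc/hosts edit from the shell snippet to
    // an in-memory copy of the file's contents.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func ensureHostMapping(hosts, hostname string) string {
    	lines := strings.Split(hosts, "\n")
    	for _, l := range lines {
    		fields := strings.Fields(l)
    		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
    			return hosts // hostname already mapped, nothing to do
    		}
    	}
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname // replace the existing loopback alias
    			return strings.Join(lines, "\n")
    		}
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
    	in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
    	fmt.Print(ensureHostMapping(in, "no-preload-950431"))
    }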
	I1207 21:07:29.257632   47677 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:07:29.257647   47677 buildroot.go:174] setting up certificates
	I1207 21:07:29.257657   47677 provision.go:83] configureAuth start
	I1207 21:07:29.257665   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:07:29.257954   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:07:29.260827   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.261273   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.261299   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.261581   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.263670   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.264076   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.264109   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.264234   47677 provision.go:138] copyHostCerts
	I1207 21:07:29.264302   47677 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:07:29.264314   47677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:07:29.264384   47677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:07:29.264493   47677 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:07:29.264507   47677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:07:29.264541   47677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:07:29.264621   47677 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:07:29.264633   47677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:07:29.264664   47677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:07:29.264736   47677 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.no-preload-950431 san=[192.168.50.100 192.168.50.100 localhost 127.0.0.1 minikube no-preload-950431]
	I1207 21:07:29.438372   47677 provision.go:172] copyRemoteCerts
	I1207 21:07:29.438436   47677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:07:29.438458   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.441383   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.441847   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.441895   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.443278   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:29.443489   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.443663   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:29.443813   47677 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:07:29.529376   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:07:29.555868   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1207 21:07:29.579234   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:07:29.602071   47677 provision.go:86] duration metric: configureAuth took 344.401753ms
	I1207 21:07:29.602101   47677 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:07:29.602274   47677 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:07:29.602343   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.604813   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.605209   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.605236   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.605414   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:29.605613   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.605771   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.605907   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:29.606059   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:29.606384   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:29.606418   47677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:07:30.167457   47885 start.go:369] acquired machines lock for "pause-763966" in 48.299920182s
	I1207 21:07:30.167512   47885 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:07:30.167523   47885 fix.go:54] fixHost starting: 
	I1207 21:07:30.167890   47885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:07:30.167939   47885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:07:30.184020   47885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1207 21:07:30.184435   47885 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:07:30.184906   47885 main.go:141] libmachine: Using API Version  1
	I1207 21:07:30.184935   47885 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:07:30.185309   47885 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:07:30.185514   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:30.185686   47885 main.go:141] libmachine: (pause-763966) Calling .GetState
	I1207 21:07:30.187354   47885 fix.go:102] recreateIfNeeded on pause-763966: state=Running err=<nil>
	W1207 21:07:30.187390   47885 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:07:30.189755   47885 out.go:177] * Updating the running kvm2 "pause-763966" VM ...
	I1207 21:07:29.918574   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:07:29.918602   47677 main.go:141] libmachine: Checking connection to Docker...
	I1207 21:07:29.918613   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetURL
	I1207 21:07:29.919981   47677 main.go:141] libmachine: (no-preload-950431) DBG | Using libvirt version 6000000
	I1207 21:07:29.922407   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.922737   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.922777   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.922894   47677 main.go:141] libmachine: Docker is up and running!
	I1207 21:07:29.922914   47677 main.go:141] libmachine: Reticulating splines...
	I1207 21:07:29.922922   47677 client.go:171] LocalClient.Create took 28.302122475s
	I1207 21:07:29.922945   47677 start.go:167] duration metric: libmachine.API.Create for "no-preload-950431" took 28.302190846s
	I1207 21:07:29.922964   47677 start.go:300] post-start starting for "no-preload-950431" (driver="kvm2")
	I1207 21:07:29.922978   47677 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:07:29.922995   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:29.923268   47677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:07:29.923289   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.925456   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.925795   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.925834   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.925997   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:29.926164   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.926312   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:29.926438   47677 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:07:30.011437   47677 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:07:30.015959   47677 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:07:30.015985   47677 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:07:30.016052   47677 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:07:30.016170   47677 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:07:30.016275   47677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:07:30.024420   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:07:30.047964   47677 start.go:303] post-start completed in 124.984366ms
	I1207 21:07:30.048018   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:07:30.048571   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:07:30.051216   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.051566   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:30.051597   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.051813   47677 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:07:30.052040   47677 start.go:128] duration metric: createHost completed in 28.45337169s
	I1207 21:07:30.052068   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:30.054374   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.054599   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:30.054621   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.054722   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:30.054890   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:30.055085   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:30.055211   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:30.055343   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.055655   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:30.055672   47677 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:07:30.167307   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983250.155503456
	
	I1207 21:07:30.167331   47677 fix.go:206] guest clock: 1701983250.155503456
	I1207 21:07:30.167338   47677 fix.go:219] Guest: 2023-12-07 21:07:30.155503456 +0000 UTC Remote: 2023-12-07 21:07:30.052054396 +0000 UTC m=+75.310239283 (delta=103.44906ms)
	I1207 21:07:30.167375   47677 fix.go:190] guest clock delta is within tolerance: 103.44906ms
	I1207 21:07:30.167379   47677 start.go:83] releasing machines lock for "no-preload-950431", held for 28.568876733s
	I1207 21:07:30.167411   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:30.167744   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:07:30.170601   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.171006   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:30.171039   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.171165   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:30.171686   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:30.171883   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:30.171968   47677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:07:30.172009   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:30.172086   47677 ssh_runner.go:195] Run: cat /version.json
	I1207 21:07:30.172110   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:30.174582   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.174899   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:30.174925   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.175056   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.175110   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:30.175286   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:30.175446   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:30.175470   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:30.175502   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.175587   47677 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:07:30.175659   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:30.175796   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:30.175957   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:30.176093   47677 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:07:30.258701   47677 ssh_runner.go:195] Run: systemctl --version
	I1207 21:07:30.283727   47677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:07:30.440875   47677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:07:30.447387   47677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:07:30.447459   47677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:07:30.462448   47677 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
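The step above sidelines any pre-existing bridge/podman CNI configs by renaming them with a ".mk_disabled" suffix so they do not shadow the CNI configuration minikube manages. A local sketch of the equivalent rename (the real step runs a "find ... -exec mv" over SSH on the guest):

    // disableBridgeCNIConfigs renames bridge/podman configs in dir to *.mk_disabled.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func disableBridgeCNIConfigs(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return disabled, err
    			}
    			disabled = append(disabled, src)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	dir, _ := os.MkdirTemp("", "cni")
    	os.WriteFile(filepath.Join(dir, "87-podman-bridge.conflist"), []byte("{}"), 0o644)
    	got, err := disableBridgeCNIConfigs(dir)
    	fmt.Println(got, err)
    }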
	I1207 21:07:30.462470   47677 start.go:475] detecting cgroup driver to use...
	I1207 21:07:30.462550   47677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:07:30.477803   47677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:07:30.489963   47677 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:07:30.490019   47677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:07:30.502404   47677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:07:30.515339   47677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:07:30.628615   47677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:07:30.767827   47677 docker.go:219] disabling docker service ...
	I1207 21:07:30.767885   47677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:07:30.784387   47677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:07:30.800947   47677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:07:30.905850   47677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:07:31.010460   47677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:07:31.024659   47677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:07:31.044132   47677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:07:31.044186   47677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:31.055525   47677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:07:31.055604   47677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:31.067056   47677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:31.078086   47677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:31.089451   47677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:07:31.101580   47677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:07:31.111926   47677 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:07:31.112000   47677 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:07:31.127475   47677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:07:31.138087   47677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:07:31.247704   47677 ssh_runner.go:195] Run: sudo systemctl restart crio
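The block above is the cri-o runtime preparation: point crictl at /var/run/crio/crio.sock, pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs with conmon_cgroup = "pod", and restart crio. A sketch of those drop-in edits applied to an in-memory copy of 02-crio.conf instead of via sed over SSH (the sample file contents below are invented for illustration):

    // configureCrio rewrites the pause image and cgroup settings, mirroring the
    // sed commands in the log (including the conmon_cgroup delete + append pair).
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func configureCrio(conf, pauseImage, cgroupManager string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
    	return conf
    }

    func main() {
    	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
    	fmt.Print(configureCrio(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
    }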
	I1207 21:07:31.419666   47677 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:07:31.419750   47677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:07:31.424475   47677 start.go:543] Will wait 60s for crictl version
	I1207 21:07:31.424528   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:31.428179   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:07:31.467873   47677 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:07:31.467947   47677 ssh_runner.go:195] Run: crio --version
	I1207 21:07:31.513224   47677 ssh_runner.go:195] Run: crio --version
	I1207 21:07:31.566059   47677 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1207 21:07:30.191196   47885 machine.go:88] provisioning docker machine ...
	I1207 21:07:30.191218   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:30.191434   47885 main.go:141] libmachine: (pause-763966) Calling .GetMachineName
	I1207 21:07:30.191591   47885 buildroot.go:166] provisioning hostname "pause-763966"
	I1207 21:07:30.191615   47885 main.go:141] libmachine: (pause-763966) Calling .GetMachineName
	I1207 21:07:30.191775   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.194611   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.195060   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.195087   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.195229   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.195414   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.195577   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.195700   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.195847   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.196172   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:30.196186   47885 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-763966 && echo "pause-763966" | sudo tee /etc/hostname
	I1207 21:07:30.339851   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-763966
	
	I1207 21:07:30.339883   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.342876   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.343334   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.343366   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.343576   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.343772   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.343982   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.344187   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.344380   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.344864   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:30.344891   47885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-763966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-763966/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-763966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:07:30.463538   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:07:30.463567   47885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:07:30.463609   47885 buildroot.go:174] setting up certificates
	I1207 21:07:30.463619   47885 provision.go:83] configureAuth start
	I1207 21:07:30.463632   47885 main.go:141] libmachine: (pause-763966) Calling .GetMachineName
	I1207 21:07:30.463881   47885 main.go:141] libmachine: (pause-763966) Calling .GetIP
	I1207 21:07:30.466509   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.466835   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.466865   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.467040   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.469115   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.469452   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.469481   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.469647   47885 provision.go:138] copyHostCerts
	I1207 21:07:30.469711   47885 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:07:30.469721   47885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:07:30.469771   47885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:07:30.469843   47885 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:07:30.469851   47885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:07:30.469874   47885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:07:30.469930   47885 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:07:30.469944   47885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:07:30.469968   47885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:07:30.470050   47885 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.pause-763966 san=[192.168.39.237 192.168.39.237 localhost 127.0.0.1 minikube pause-763966]
	I1207 21:07:30.624834   47885 provision.go:172] copyRemoteCerts
	I1207 21:07:30.624904   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:07:30.624932   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.627807   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.628175   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.628216   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.628466   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.628663   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.628852   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.629015   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:30.721413   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:07:30.750553   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1207 21:07:30.776230   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:07:30.808007   47885 provision.go:86] duration metric: configureAuth took 344.374986ms
	I1207 21:07:30.808031   47885 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:07:30.808223   47885 config.go:182] Loaded profile config "pause-763966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:07:30.808312   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.811071   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.811380   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.811415   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.811554   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.811747   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.811950   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.812083   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.812250   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.812583   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:30.812600   47885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:07:29.194325   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:31.691847   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:33.691940   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:31.567476   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:07:31.570153   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:31.570440   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:31.570470   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:31.570582   47677 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1207 21:07:31.574572   47677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:07:31.586889   47677 localpath.go:92] copying /home/jenkins/minikube-integration/17719-9628/.minikube/client.crt -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt
	I1207 21:07:31.587012   47677 localpath.go:117] copying /home/jenkins/minikube-integration/17719-9628/.minikube/client.key -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.key
	I1207 21:07:31.587105   47677 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:07:31.587135   47677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:07:31.619383   47677 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1207 21:07:31.619411   47677 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:07:31.619462   47677 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:07:31.619486   47677 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:07:31.619521   47677 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:07:31.619535   47677 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:07:31.619586   47677 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:07:31.619639   47677 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:07:31.619592   47677 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:07:31.619610   47677 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1207 21:07:31.620548   47677 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1207 21:07:31.620572   47677 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:07:31.620548   47677 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:07:31.620548   47677 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:07:31.620614   47677 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:07:31.620617   47677 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:07:31.620619   47677 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:07:31.620551   47677 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:07:31.835253   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:07:31.869555   47677 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1207 21:07:31.869606   47677 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:07:31.869660   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:31.873415   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:07:31.886860   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1207 21:07:31.893746   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:07:31.897844   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:07:31.898663   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1207 21:07:31.899840   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:07:31.913889   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1207 21:07:31.913995   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:07:31.941645   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:07:32.014267   47677 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I1207 21:07:32.014311   47677 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I1207 21:07:32.014358   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.052690   47677 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1207 21:07:32.052728   47677 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1207 21:07:32.052727   47677 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1207 21:07:32.052786   47677 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:07:32.052805   47677 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1207 21:07:32.052836   47677 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:07:32.052849   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.052732   47677 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:07:32.052889   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.052747   47677 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:07:32.052909   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.29.0-rc.1': No such file or directory
	I1207 21:07:32.052929   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (28359680 bytes)
	I1207 21:07:32.052931   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.052897   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.062700   47677 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1207 21:07:32.062739   47677 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:07:32.062740   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I1207 21:07:32.062776   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.075658   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:07:32.075754   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:07:32.075791   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:07:32.075833   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1207 21:07:32.236529   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1207 21:07:32.236577   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1207 21:07:32.236634   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1207 21:07:32.236658   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:07:32.236724   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:07:32.236636   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:07:32.240172   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I1207 21:07:32.240228   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1207 21:07:32.240247   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9
	I1207 21:07:32.240278   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:07:32.240308   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:07:32.263484   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I1207 21:07:32.263514   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I1207 21:07:32.284181   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.10-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.10-0': No such file or directory
	I1207 21:07:32.284243   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1': No such file or directory
	I1207 21:07:32.284275   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (35069952 bytes)
	I1207 21:07:32.284271   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 --> /var/lib/minikube/images/etcd_3.5.10-0 (56657408 bytes)
	I1207 21:07:32.334812   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1207 21:07:32.334871   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1': No such file or directory
	I1207 21:07:32.334883   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I1207 21:07:32.334899   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (18522624 bytes)
	I1207 21:07:32.334912   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I1207 21:07:32.334914   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:07:32.402053   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1': No such file or directory
	I1207 21:07:32.402085   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (33436672 bytes)
	I1207 21:07:32.458364   47677 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.9
	I1207 21:07:32.458438   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I1207 21:07:32.502623   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:07:33.278051   47677 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I1207 21:07:33.278102   47677 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:07:33.278153   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:07:33.278158   47677 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1207 21:07:33.278204   47677 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:07:33.278252   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:37.686863   48213 start.go:369] acquired machines lock for "embed-certs-598346" in 33.450781604s
	I1207 21:07:37.686928   48213 start.go:93] Provisioning new machine with config: &{Name:embed-certs-598346 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:07:37.687076   48213 start.go:125] createHost starting for "" (driver="kvm2")
	I1207 21:07:35.694572   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:37.238382   46932 pod_ready.go:92] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"True"
	I1207 21:07:37.238411   46932 pod_ready.go:81] duration metric: took 41.566346071s waiting for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:07:37.238424   46932 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:07:37.246659   46932 pod_ready.go:92] pod "kube-proxy-wrl9t" in "kube-system" namespace has status "Ready":"True"
	I1207 21:07:37.246687   46932 pod_ready.go:81] duration metric: took 8.2552ms waiting for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:07:37.246698   46932 pod_ready.go:38] duration metric: took 41.579183632s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:07:37.246716   46932 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:07:37.246769   46932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:07:37.262341   46932 api_server.go:72] duration metric: took 41.906320635s to wait for apiserver process to appear ...
	I1207 21:07:37.262368   46932 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:07:37.262386   46932 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:07:37.270080   46932 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:07:37.271209   46932 api_server.go:141] control plane version: v1.16.0
	I1207 21:07:37.271233   46932 api_server.go:131] duration metric: took 8.858171ms to wait for apiserver health ...
	I1207 21:07:37.271244   46932 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:07:37.275404   46932 system_pods.go:59] 3 kube-system pods found
	I1207 21:07:37.275438   46932 system_pods.go:61] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:37.275446   46932 system_pods.go:61] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:37.275453   46932 system_pods.go:61] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:37.275461   46932 system_pods.go:74] duration metric: took 4.210053ms to wait for pod list to return data ...
	I1207 21:07:37.275469   46932 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:07:37.278487   46932 default_sa.go:45] found service account: "default"
	I1207 21:07:37.278513   46932 default_sa.go:55] duration metric: took 3.038168ms for default service account to be created ...
	I1207 21:07:37.278520   46932 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:07:37.284484   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:37.284521   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:37.284530   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:37.284536   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:37.284555   46932 retry.go:31] will retry after 296.348393ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:37.585710   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:37.585742   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:37.585750   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:37.585756   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:37.585775   46932 retry.go:31] will retry after 323.000686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:37.913762   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:37.913793   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:37.913800   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:37.913805   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:37.913822   46932 retry.go:31] will retry after 382.501661ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:38.306840   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:38.306874   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:38.306882   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:38.306888   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:38.306904   46932 retry.go:31] will retry after 413.279764ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:37.689202   48213 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 21:07:37.689404   48213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:07:37.689454   48213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:07:37.706261   48213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I1207 21:07:37.706731   48213 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:07:37.707372   48213 main.go:141] libmachine: Using API Version  1
	I1207 21:07:37.707395   48213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:07:37.707735   48213 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:07:37.707925   48213 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:07:37.708081   48213 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:07:37.708337   48213 start.go:159] libmachine.API.Create for "embed-certs-598346" (driver="kvm2")
	I1207 21:07:37.708369   48213 client.go:168] LocalClient.Create starting
	I1207 21:07:37.708405   48213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 21:07:37.708440   48213 main.go:141] libmachine: Decoding PEM data...
	I1207 21:07:37.708471   48213 main.go:141] libmachine: Parsing certificate...
	I1207 21:07:37.708540   48213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 21:07:37.708565   48213 main.go:141] libmachine: Decoding PEM data...
	I1207 21:07:37.708585   48213 main.go:141] libmachine: Parsing certificate...
	I1207 21:07:37.708611   48213 main.go:141] libmachine: Running pre-create checks...
	I1207 21:07:37.708624   48213 main.go:141] libmachine: (embed-certs-598346) Calling .PreCreateCheck
	I1207 21:07:37.709037   48213 main.go:141] libmachine: (embed-certs-598346) Calling .GetConfigRaw
	I1207 21:07:37.709479   48213 main.go:141] libmachine: Creating machine...
	I1207 21:07:37.709495   48213 main.go:141] libmachine: (embed-certs-598346) Calling .Create
	I1207 21:07:37.709630   48213 main.go:141] libmachine: (embed-certs-598346) Creating KVM machine...
	I1207 21:07:37.710891   48213 main.go:141] libmachine: (embed-certs-598346) DBG | found existing default KVM network
	I1207 21:07:37.712266   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:37.712086   48402 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e1:ad:58} reservation:<nil>}
	I1207 21:07:37.713492   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:37.713415   48402 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:c6:79} reservation:<nil>}
	I1207 21:07:37.714600   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:37.714510   48402 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:00:c3:ba} reservation:<nil>}
	I1207 21:07:37.716014   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:37.715933   48402 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000387150}
	I1207 21:07:37.722306   48213 main.go:141] libmachine: (embed-certs-598346) DBG | trying to create private KVM network mk-embed-certs-598346 192.168.72.0/24...
	I1207 21:07:37.814681   48213 main.go:141] libmachine: (embed-certs-598346) DBG | private KVM network mk-embed-certs-598346 192.168.72.0/24 created
	I1207 21:07:37.814731   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:37.814634   48402 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:07:37.814754   48213 main.go:141] libmachine: (embed-certs-598346) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346 ...
	I1207 21:07:37.814767   48213 main.go:141] libmachine: (embed-certs-598346) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 21:07:37.814790   48213 main.go:141] libmachine: (embed-certs-598346) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 21:07:38.054086   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:38.053910   48402 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa...
	I1207 21:07:38.281828   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:38.281679   48402 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/embed-certs-598346.rawdisk...
	I1207 21:07:38.281861   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Writing magic tar header
	I1207 21:07:38.281897   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Writing SSH key tar header
	I1207 21:07:38.282428   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:38.282343   48402 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346 ...
	I1207 21:07:38.282539   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346
	I1207 21:07:38.282574   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 21:07:38.282602   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:07:38.282623   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 21:07:38.282638   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 21:07:38.282655   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins
	I1207 21:07:38.282668   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home
	I1207 21:07:38.282684   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346 (perms=drwx------)
	I1207 21:07:38.282702   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 21:07:38.282718   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 21:07:38.282735   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 21:07:38.282753   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 21:07:38.282774   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 21:07:38.282783   48213 main.go:141] libmachine: (embed-certs-598346) Creating domain...
	I1207 21:07:38.282796   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Skipping /home - not owner
	I1207 21:07:38.283785   48213 main.go:141] libmachine: (embed-certs-598346) define libvirt domain using xml: 
	I1207 21:07:38.283807   48213 main.go:141] libmachine: (embed-certs-598346) <domain type='kvm'>
	I1207 21:07:38.283818   48213 main.go:141] libmachine: (embed-certs-598346)   <name>embed-certs-598346</name>
	I1207 21:07:38.283843   48213 main.go:141] libmachine: (embed-certs-598346)   <memory unit='MiB'>2200</memory>
	I1207 21:07:38.283859   48213 main.go:141] libmachine: (embed-certs-598346)   <vcpu>2</vcpu>
	I1207 21:07:38.283877   48213 main.go:141] libmachine: (embed-certs-598346)   <features>
	I1207 21:07:38.283892   48213 main.go:141] libmachine: (embed-certs-598346)     <acpi/>
	I1207 21:07:38.283904   48213 main.go:141] libmachine: (embed-certs-598346)     <apic/>
	I1207 21:07:38.283943   48213 main.go:141] libmachine: (embed-certs-598346)     <pae/>
	I1207 21:07:38.283966   48213 main.go:141] libmachine: (embed-certs-598346)     
	I1207 21:07:38.283986   48213 main.go:141] libmachine: (embed-certs-598346)   </features>
	I1207 21:07:38.284004   48213 main.go:141] libmachine: (embed-certs-598346)   <cpu mode='host-passthrough'>
	I1207 21:07:38.284033   48213 main.go:141] libmachine: (embed-certs-598346)   
	I1207 21:07:38.284053   48213 main.go:141] libmachine: (embed-certs-598346)   </cpu>
	I1207 21:07:38.284067   48213 main.go:141] libmachine: (embed-certs-598346)   <os>
	I1207 21:07:38.284079   48213 main.go:141] libmachine: (embed-certs-598346)     <type>hvm</type>
	I1207 21:07:38.284092   48213 main.go:141] libmachine: (embed-certs-598346)     <boot dev='cdrom'/>
	I1207 21:07:38.284103   48213 main.go:141] libmachine: (embed-certs-598346)     <boot dev='hd'/>
	I1207 21:07:38.284117   48213 main.go:141] libmachine: (embed-certs-598346)     <bootmenu enable='no'/>
	I1207 21:07:38.284128   48213 main.go:141] libmachine: (embed-certs-598346)   </os>
	I1207 21:07:38.284158   48213 main.go:141] libmachine: (embed-certs-598346)   <devices>
	I1207 21:07:38.284192   48213 main.go:141] libmachine: (embed-certs-598346)     <disk type='file' device='cdrom'>
	I1207 21:07:38.284226   48213 main.go:141] libmachine: (embed-certs-598346)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/boot2docker.iso'/>
	I1207 21:07:38.284240   48213 main.go:141] libmachine: (embed-certs-598346)       <target dev='hdc' bus='scsi'/>
	I1207 21:07:38.284263   48213 main.go:141] libmachine: (embed-certs-598346)       <readonly/>
	I1207 21:07:38.284276   48213 main.go:141] libmachine: (embed-certs-598346)     </disk>
	I1207 21:07:38.284291   48213 main.go:141] libmachine: (embed-certs-598346)     <disk type='file' device='disk'>
	I1207 21:07:38.284306   48213 main.go:141] libmachine: (embed-certs-598346)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 21:07:38.284326   48213 main.go:141] libmachine: (embed-certs-598346)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/embed-certs-598346.rawdisk'/>
	I1207 21:07:38.284338   48213 main.go:141] libmachine: (embed-certs-598346)       <target dev='hda' bus='virtio'/>
	I1207 21:07:38.284351   48213 main.go:141] libmachine: (embed-certs-598346)     </disk>
	I1207 21:07:38.284365   48213 main.go:141] libmachine: (embed-certs-598346)     <interface type='network'>
	I1207 21:07:38.284379   48213 main.go:141] libmachine: (embed-certs-598346)       <source network='mk-embed-certs-598346'/>
	I1207 21:07:38.284397   48213 main.go:141] libmachine: (embed-certs-598346)       <model type='virtio'/>
	I1207 21:07:38.284408   48213 main.go:141] libmachine: (embed-certs-598346)     </interface>
	I1207 21:07:38.284417   48213 main.go:141] libmachine: (embed-certs-598346)     <interface type='network'>
	I1207 21:07:38.284437   48213 main.go:141] libmachine: (embed-certs-598346)       <source network='default'/>
	I1207 21:07:38.284468   48213 main.go:141] libmachine: (embed-certs-598346)       <model type='virtio'/>
	I1207 21:07:38.284482   48213 main.go:141] libmachine: (embed-certs-598346)     </interface>
	I1207 21:07:38.284494   48213 main.go:141] libmachine: (embed-certs-598346)     <serial type='pty'>
	I1207 21:07:38.284506   48213 main.go:141] libmachine: (embed-certs-598346)       <target port='0'/>
	I1207 21:07:38.284514   48213 main.go:141] libmachine: (embed-certs-598346)     </serial>
	I1207 21:07:38.284527   48213 main.go:141] libmachine: (embed-certs-598346)     <console type='pty'>
	I1207 21:07:38.284542   48213 main.go:141] libmachine: (embed-certs-598346)       <target type='serial' port='0'/>
	I1207 21:07:38.284566   48213 main.go:141] libmachine: (embed-certs-598346)     </console>
	I1207 21:07:38.284593   48213 main.go:141] libmachine: (embed-certs-598346)     <rng model='virtio'>
	I1207 21:07:38.284609   48213 main.go:141] libmachine: (embed-certs-598346)       <backend model='random'>/dev/random</backend>
	I1207 21:07:38.284628   48213 main.go:141] libmachine: (embed-certs-598346)     </rng>
	I1207 21:07:38.284645   48213 main.go:141] libmachine: (embed-certs-598346)     
	I1207 21:07:38.284658   48213 main.go:141] libmachine: (embed-certs-598346)     
	I1207 21:07:38.284670   48213 main.go:141] libmachine: (embed-certs-598346)   </devices>
	I1207 21:07:38.284680   48213 main.go:141] libmachine: (embed-certs-598346) </domain>
	I1207 21:07:38.284692   48213 main.go:141] libmachine: (embed-certs-598346) 
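	(Aside, not part of the test run: the domain XML defined for embed-certs-598346 is printed in full above. A minimal sketch of inspecting that definition out-of-band on the libvirt host, assuming virsh is installed and using the qemu:///system URI from the machine config.)
	    virsh --connect qemu:///system dumpxml embed-certs-598346         # domain XML as libvirt stored it
	    virsh --connect qemu:///system net-dumpxml mk-embed-certs-598346  # the private network created above
	    virsh --connect qemu:///system domiflist embed-certs-598346       # NICs attached to mk-embed-certs-598346 and default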
	I1207 21:07:38.289472   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:c9:10:99 in network default
	I1207 21:07:38.290233   48213 main.go:141] libmachine: (embed-certs-598346) Ensuring networks are active...
	I1207 21:07:38.290261   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:38.291102   48213 main.go:141] libmachine: (embed-certs-598346) Ensuring network default is active
	I1207 21:07:38.291517   48213 main.go:141] libmachine: (embed-certs-598346) Ensuring network mk-embed-certs-598346 is active
	I1207 21:07:38.292241   48213 main.go:141] libmachine: (embed-certs-598346) Getting domain xml...
	I1207 21:07:38.293138   48213 main.go:141] libmachine: (embed-certs-598346) Creating domain...
	I1207 21:07:35.136377   47677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (1.858197818s)
	I1207 21:07:35.136411   47677 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1207 21:07:35.136436   47677 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:07:35.136489   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:07:35.136408   47677 ssh_runner.go:235] Completed: which crictl: (1.858134043s)
	I1207 21:07:35.136603   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:07:38.020856   47677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.884339389s)
	I1207 21:07:38.020877   47677 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1207 21:07:38.020896   47677 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:07:38.020944   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:07:38.020971   47677 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.884345585s)
	I1207 21:07:38.021039   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 21:07:38.021133   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:07:37.404458   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:07:37.404490   47885 machine.go:91] provisioned docker machine in 7.213270016s
	I1207 21:07:37.404503   47885 start.go:300] post-start starting for "pause-763966" (driver="kvm2")
	I1207 21:07:37.404515   47885 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:07:37.404540   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.404909   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:07:37.404940   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.407902   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.408334   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.408368   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.408509   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.408711   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.408837   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.408970   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:37.521762   47885 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:07:37.526220   47885 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:07:37.526247   47885 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:07:37.526308   47885 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:07:37.526416   47885 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:07:37.526541   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:07:37.539457   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:07:37.564052   47885 start.go:303] post-start completed in 159.537127ms
	I1207 21:07:37.564083   47885 fix.go:56] fixHost completed within 7.39656043s
	I1207 21:07:37.564102   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.567031   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.567432   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.567462   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.567631   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.567849   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.568032   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.568206   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.568384   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:37.568686   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:37.568707   47885 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:07:37.686682   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983257.682984288
	
	I1207 21:07:37.686707   47885 fix.go:206] guest clock: 1701983257.682984288
	I1207 21:07:37.686716   47885 fix.go:219] Guest: 2023-12-07 21:07:37.682984288 +0000 UTC Remote: 2023-12-07 21:07:37.564087197 +0000 UTC m=+55.882358893 (delta=118.897091ms)
	I1207 21:07:37.686771   47885 fix.go:190] guest clock delta is within tolerance: 118.897091ms
	I1207 21:07:37.686780   47885 start.go:83] releasing machines lock for "pause-763966", held for 7.51930022s
	I1207 21:07:37.686812   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.687086   47885 main.go:141] libmachine: (pause-763966) Calling .GetIP
	I1207 21:07:37.689968   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.690410   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.690448   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.690593   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.691097   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.691281   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.691389   47885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:07:37.691429   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.691532   47885 ssh_runner.go:195] Run: cat /version.json
	I1207 21:07:37.691558   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.694652   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.694973   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.695096   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.695128   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.695319   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.695451   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.695488   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.695541   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.695756   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.695922   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.695930   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:37.696478   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.696672   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.696848   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:37.824802   47885 ssh_runner.go:195] Run: systemctl --version
	I1207 21:07:37.833573   47885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:07:37.992042   47885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:07:37.998690   47885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:07:37.998764   47885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:07:38.008789   47885 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 21:07:38.008817   47885 start.go:475] detecting cgroup driver to use...
	I1207 21:07:38.008903   47885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:07:38.029726   47885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:07:38.045392   47885 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:07:38.045453   47885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:07:38.061788   47885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:07:38.077501   47885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:07:38.230276   47885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:07:38.929441   47885 docker.go:219] disabling docker service ...
	I1207 21:07:38.929533   47885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:07:38.972952   47885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:07:39.000065   47885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:07:39.365500   47885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:07:39.657590   47885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:07:39.734261   47885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:07:39.833606   47885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:07:39.833681   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.870335   47885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:07:39.870417   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.901831   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.928228   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
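The four sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.9 as its pause image and cgroupfs (with conmon placed in the "pod" cgroup) as its cgroup manager. Below is a minimal Go sketch of equivalent in-place edits, assuming a locally readable config file; the helper name setTOMLKey is illustrative and not part of minikube.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey replaces any existing `key = ...` assignment with `key = "value"`,
// mirroring the sed edits shown in the log above.
func setTOMLKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // adjust when experimenting locally
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
}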
	I1207 21:07:39.952330   47885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:07:39.972481   47885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:07:39.987141   47885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:07:40.003730   47885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:07:40.274754   47885 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:07:41.974747   47885 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.699950937s)
	I1207 21:07:41.974779   47885 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:07:41.974832   47885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:07:41.981723   47885 start.go:543] Will wait 60s for crictl version
	I1207 21:07:41.981786   47885 ssh_runner.go:195] Run: which crictl
	I1207 21:07:41.987013   47885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:07:42.050779   47885 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:07:42.050904   47885 ssh_runner.go:195] Run: crio --version
	I1207 21:07:42.110899   47885 ssh_runner.go:195] Run: crio --version
	I1207 21:07:42.164304   47885 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:07:38.725495   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:38.725529   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:38.725538   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:38.725546   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:38.725569   46932 retry.go:31] will retry after 460.079146ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:39.191293   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:39.191323   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:39.191331   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:39.191338   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:39.191354   46932 retry.go:31] will retry after 654.217973ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:39.851451   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:39.851552   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:39.851566   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:39.851572   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:39.851588   46932 retry.go:31] will retry after 955.752241ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:40.812025   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:40.812059   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:40.812067   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:40.812073   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:40.812091   46932 retry.go:31] will retry after 1.045207444s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:41.863772   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:41.863810   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:41.863818   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:41.863825   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:41.863844   46932 retry.go:31] will retry after 1.532062886s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:43.400344   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:43.400380   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:43.400389   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:43.400395   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:43.400414   46932 retry.go:31] will retry after 1.410839946s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:39.829535   48213 main.go:141] libmachine: (embed-certs-598346) Waiting to get IP...
	I1207 21:07:39.830545   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:39.831044   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:39.831070   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:39.831017   48402 retry.go:31] will retry after 230.292105ms: waiting for machine to come up
	I1207 21:07:40.063716   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:40.064449   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:40.064481   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:40.064359   48402 retry.go:31] will retry after 329.840952ms: waiting for machine to come up
	I1207 21:07:40.396107   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:40.396746   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:40.396775   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:40.396658   48402 retry.go:31] will retry after 455.324621ms: waiting for machine to come up
	I1207 21:07:40.854129   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:40.854604   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:40.854629   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:40.854550   48402 retry.go:31] will retry after 580.382717ms: waiting for machine to come up
	I1207 21:07:41.436363   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:41.436926   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:41.436952   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:41.436875   48402 retry.go:31] will retry after 695.594858ms: waiting for machine to come up
	I1207 21:07:42.134414   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:42.135037   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:42.135069   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:42.134968   48402 retry.go:31] will retry after 822.431255ms: waiting for machine to come up
	I1207 21:07:42.959753   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:42.960319   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:42.960350   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:42.960278   48402 retry.go:31] will retry after 954.543188ms: waiting for machine to come up
	I1207 21:07:43.916120   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:43.916542   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:43.916587   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:43.916526   48402 retry.go:31] will retry after 1.10388154s: waiting for machine to come up
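The repeated "will retry after ...: waiting for machine to come up" lines come from a poll loop that re-queries the libvirt DHCP lease with a growing, jittered delay until the domain reports an IP address. A rough Go sketch of that pattern follows, assuming a generic lookup callback; waitForIP and the stub lease function are illustrative, not minikube's code.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the timeout expires,
// sleeping a growing, jittered interval between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay, roughly matching the cadence in the log
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempt := 0
	ip, err := waitForIP(func() (string, error) {
		attempt++
		if attempt < 4 {
			return "", errors.New("no DHCP lease yet") // stands in for the libvirt query
		}
		return "192.0.2.10", nil // placeholder address
	}, 2*time.Minute)
	fmt.Println(ip, err)
}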
	I1207 21:07:40.305581   47677 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.284418443s)
	I1207 21:07:40.305617   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1207 21:07:40.305646   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1207 21:07:40.305654   47677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.284686205s)
	I1207 21:07:40.305678   47677 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1207 21:07:40.305712   47677 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:07:40.305755   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:07:43.426529   47677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.120745681s)
	I1207 21:07:43.426561   47677 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1207 21:07:43.426593   47677 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:07:43.426641   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
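Process 47677 is restoring cached images here: each tarball under /var/lib/minikube/images is transferred and then loaded into the image store with `sudo podman load -i`, one image at a time. A hedged sketch of that loop, run locally rather than through the SSH runner; the glob pattern is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

func main() {
	// Load every cached image tarball, one at a time, as the log above does.
	tars, err := filepath.Glob("/var/lib/minikube/images/*")
	if err != nil {
		panic(err)
	}
	for _, tar := range tars {
		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
		if err != nil {
			panic(fmt.Errorf("loading %s: %v\n%s", tar, err, out))
		}
		fmt.Printf("loaded %s\n", tar)
	}
}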
	I1207 21:07:42.165952   47885 main.go:141] libmachine: (pause-763966) Calling .GetIP
	I1207 21:07:42.169388   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:42.169815   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:42.169842   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:42.170126   47885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 21:07:42.175657   47885 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:07:42.175717   47885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:07:42.234910   47885 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:07:42.234943   47885 crio.go:415] Images already preloaded, skipping extraction
	I1207 21:07:42.235020   47885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:07:42.278372   47885 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:07:42.278396   47885 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:07:42.278517   47885 ssh_runner.go:195] Run: crio config
	I1207 21:07:42.444519   47885 cni.go:84] Creating CNI manager for ""
	I1207 21:07:42.444554   47885 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:07:42.444586   47885 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:07:42.444620   47885 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-763966 NodeName:pause-763966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:07:42.444881   47885 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-763966"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:07:42.445014   47885 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-763966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-763966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:07:42.445086   47885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:07:42.467395   47885 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:07:42.467487   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:07:42.511628   47885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1207 21:07:42.544396   47885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:07:42.591065   47885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1207 21:07:42.789598   47885 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I1207 21:07:42.825431   47885 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966 for IP: 192.168.39.237
	I1207 21:07:42.825474   47885 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:07:42.825656   47885 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:07:42.825713   47885 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:07:42.825819   47885 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/client.key
	I1207 21:07:42.825914   47885 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/apiserver.key.cf509944
	I1207 21:07:42.825992   47885 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/proxy-client.key
	I1207 21:07:42.826146   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:07:42.826189   47885 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:07:42.826207   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:07:42.826244   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:07:42.826287   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:07:42.826320   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:07:42.826383   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:07:42.827247   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:07:42.902388   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 21:07:42.970133   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:07:43.015938   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 21:07:43.058300   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:07:43.137443   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:07:43.189174   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:07:43.240886   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:07:43.296335   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:07:43.350271   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:07:43.412610   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:07:43.475454   47885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:07:43.526819   47885 ssh_runner.go:195] Run: openssl version
	I1207 21:07:43.546515   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:07:43.561720   47885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:07:43.570116   47885 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:07:43.570205   47885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:07:43.577494   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:07:43.587448   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:07:43.598484   47885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:07:43.604317   47885 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:07:43.604420   47885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:07:43.611072   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:07:43.621498   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:07:43.636404   47885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:07:43.645084   47885 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:07:43.645165   47885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:07:43.657188   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:07:43.672912   47885 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:07:43.681666   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:07:43.694094   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:07:43.705788   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:07:43.719218   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:07:43.732112   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:07:43.744493   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
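Each `openssl x509 -noout ... -checkend 86400` call above asks whether the given certificate will still be valid 24 hours from now; a non-zero exit would force the certificate to be regenerated. A small Go sketch of the same check using crypto/x509, with an illustrative file path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}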
	I1207 21:07:43.764423   47885 kubeadm.go:404] StartCluster: {Name:pause-763966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.4 ClusterName:pause-763966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gp
u-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:07:43.764573   47885 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:07:43.764656   47885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:07:43.852222   47885 cri.go:89] found id: "a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022"
	I1207 21:07:43.852249   47885 cri.go:89] found id: "d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d"
	I1207 21:07:43.852259   47885 cri.go:89] found id: "085182fb95992bc23ed02f0be641f942c2f7195cdbc192e5d86f5c2e89beff27"
	I1207 21:07:43.852265   47885 cri.go:89] found id: "37d089b9fc205ebc244d160915340f06e87b5e3b59b75f3b316fb5e333bc21a6"
	I1207 21:07:43.852270   47885 cri.go:89] found id: "3eb4483e3db6fd79059095509f2360ce563cf446b08f2091f8add3d6aa59bd6b"
	I1207 21:07:43.852276   47885 cri.go:89] found id: "531a6b1cf0597b055a9600ccccdc9633c3470679ae44e383bdf594a3f7bb16b7"
	I1207 21:07:43.852282   47885 cri.go:89] found id: ""
	I1207 21:07:43.852335   47885 ssh_runner.go:195] Run: sudo runc list -f json
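The "found id:" lines above are produced by the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation that precedes them: minikube enumerates the kube-system container IDs before pausing them. A minimal local sketch of that listing step, run directly rather than through the SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all kube-system container IDs, matching the crictl invocation in the log.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}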
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 21:05:13 UTC, ends at Thu 2023-12-07 21:08:24 UTC. --
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.656384909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701983304656371153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=d18ca04d-9a23-475c-a641-606831006e5f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.656882156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cf725533-e0bc-4792-a5a8-e36dcaa1ab46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.656968646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cf725533-e0bc-4792-a5a8-e36dcaa1ab46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.657306752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311d35afa7adc6d1d9942b5aec21f92190454da644eaf6f4e7910acd7f2a093b,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983287454897198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5c6617b826def475d3fa2c178ff332e191388d1387175aadf0a351c5181d28,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983287450904396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: df9782b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1223e03355b6739ac1f97e8d18a39a3efc6d93757ea2288ef7308bb21a8bc,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983281881993560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284e513959658a57d171808e0788c6026cbf12c84885f77d2b56924ebb961190,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983281861353133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string
{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877f3c78fa25d75519189e55855e73592a2e6a56b8f5cfee02d78aedc0132db0,PodSandboxId:85990b990dc87995a8dfbd15d19e31173a62b9112d9d3088cf095d9a2eb79c7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983281802925054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2916073a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b913d5fa93c03725b56b7a886180f34b6e79cba88218227920b5c5c188a0c,PodSandboxId:8ad986722a0e9184b3f8541dcbcbb80a47765ba39399db9d884aa3164712f234,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983281832231484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473f50d6336748bfc2b65d297450d2de,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701983265645191956,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: d
f9782b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701983264609239912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486140b51e77711889ed6ef7f61897f6d58b0a3df15a1b02b40c922636892bfb,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1701983264178020573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:map[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200422fadb3739c9c51d92e4e1c0afc57789b5c1f0ec12a5c3629c294275e868,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701983264152804039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022,PodSandboxId:3dfe206eeb05a6b0a0241c2e0ec2e75802ffa6d57ef08814c0fc6a8ef1d122ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701983259742609548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
3f50d6336748bfc2b65d297450d2de,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d,PodSandboxId:652c03a9919f782932691dd53b4d4e9d2d022fac02a6e80365f8d42a6bb8d8e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701983259637589298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string
]string{io.kubernetes.container.hash: 2916073a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cf725533-e0bc-4792-a5a8-e36dcaa1ab46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.707242059Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=009042d4-fede-4750-a199-6030d38a62f7 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.707332563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=009042d4-fede-4750-a199-6030d38a62f7 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.708934650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e62087df-6ac5-43c0-8141-e0bd6a6326e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.709328302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701983304709313236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=e62087df-6ac5-43c0-8141-e0bd6a6326e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.709870674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6eff55a6-9e63-4d21-80d9-4669bd8bc8c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.709945985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6eff55a6-9e63-4d21-80d9-4669bd8bc8c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.710194957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311d35afa7adc6d1d9942b5aec21f92190454da644eaf6f4e7910acd7f2a093b,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983287454897198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5c6617b826def475d3fa2c178ff332e191388d1387175aadf0a351c5181d28,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983287450904396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: df9782b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1223e03355b6739ac1f97e8d18a39a3efc6d93757ea2288ef7308bb21a8bc,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983281881993560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284e513959658a57d171808e0788c6026cbf12c84885f77d2b56924ebb961190,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983281861353133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string
{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877f3c78fa25d75519189e55855e73592a2e6a56b8f5cfee02d78aedc0132db0,PodSandboxId:85990b990dc87995a8dfbd15d19e31173a62b9112d9d3088cf095d9a2eb79c7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983281802925054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2916073a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b913d5fa93c03725b56b7a886180f34b6e79cba88218227920b5c5c188a0c,PodSandboxId:8ad986722a0e9184b3f8541dcbcbb80a47765ba39399db9d884aa3164712f234,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983281832231484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473f50d6336748bfc2b65d297450d2de,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701983265645191956,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: d
f9782b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701983264609239912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486140b51e77711889ed6ef7f61897f6d58b0a3df15a1b02b40c922636892bfb,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1701983264178020573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:map[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200422fadb3739c9c51d92e4e1c0afc57789b5c1f0ec12a5c3629c294275e868,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701983264152804039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022,PodSandboxId:3dfe206eeb05a6b0a0241c2e0ec2e75802ffa6d57ef08814c0fc6a8ef1d122ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701983259742609548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
3f50d6336748bfc2b65d297450d2de,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d,PodSandboxId:652c03a9919f782932691dd53b4d4e9d2d022fac02a6e80365f8d42a6bb8d8e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701983259637589298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string
]string{io.kubernetes.container.hash: 2916073a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6eff55a6-9e63-4d21-80d9-4669bd8bc8c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.758812996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2dcf7618-6146-4ae7-bc99-1d77f96cbbcb name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.758867396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2dcf7618-6146-4ae7-bc99-1d77f96cbbcb name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.760118295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ccb71732-7bd2-45c1-af07-caa66cf71030 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.760646886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701983304760632505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=ccb71732-7bd2-45c1-af07-caa66cf71030 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.761269681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b7d0b6d0-ee4f-4d0a-bcde-b986f17ad3f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.761347554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b7d0b6d0-ee4f-4d0a-bcde-b986f17ad3f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.761637775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311d35afa7adc6d1d9942b5aec21f92190454da644eaf6f4e7910acd7f2a093b,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983287454897198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5c6617b826def475d3fa2c178ff332e191388d1387175aadf0a351c5181d28,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983287450904396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: df9782b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1223e03355b6739ac1f97e8d18a39a3efc6d93757ea2288ef7308bb21a8bc,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983281881993560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284e513959658a57d171808e0788c6026cbf12c84885f77d2b56924ebb961190,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983281861353133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string
{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877f3c78fa25d75519189e55855e73592a2e6a56b8f5cfee02d78aedc0132db0,PodSandboxId:85990b990dc87995a8dfbd15d19e31173a62b9112d9d3088cf095d9a2eb79c7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983281802925054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2916073a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b913d5fa93c03725b56b7a886180f34b6e79cba88218227920b5c5c188a0c,PodSandboxId:8ad986722a0e9184b3f8541dcbcbb80a47765ba39399db9d884aa3164712f234,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983281832231484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473f50d6336748bfc2b65d297450d2de,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701983265645191956,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: d
f9782b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701983264609239912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486140b51e77711889ed6ef7f61897f6d58b0a3df15a1b02b40c922636892bfb,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1701983264178020573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:map[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200422fadb3739c9c51d92e4e1c0afc57789b5c1f0ec12a5c3629c294275e868,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701983264152804039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022,PodSandboxId:3dfe206eeb05a6b0a0241c2e0ec2e75802ffa6d57ef08814c0fc6a8ef1d122ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701983259742609548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
3f50d6336748bfc2b65d297450d2de,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d,PodSandboxId:652c03a9919f782932691dd53b4d4e9d2d022fac02a6e80365f8d42a6bb8d8e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701983259637589298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string
]string{io.kubernetes.container.hash: 2916073a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b7d0b6d0-ee4f-4d0a-bcde-b986f17ad3f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.806047771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1154cea6-c247-4787-b889-40c460be8a55 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.806108186Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1154cea6-c247-4787-b889-40c460be8a55 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.807246567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=add25b46-16f7-40ac-83da-eb72c151ba7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.807675524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701983304807660433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=add25b46-16f7-40ac-83da-eb72c151ba7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.808271062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=afcc2941-3dfc-4d9a-9fb5-27db943fa51f name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.808325549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=afcc2941-3dfc-4d9a-9fb5-27db943fa51f name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:24 pause-763966 crio[2627]: time="2023-12-07 21:08:24.809157534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311d35afa7adc6d1d9942b5aec21f92190454da644eaf6f4e7910acd7f2a093b,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983287454897198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5c6617b826def475d3fa2c178ff332e191388d1387175aadf0a351c5181d28,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983287450904396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: df9782b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1223e03355b6739ac1f97e8d18a39a3efc6d93757ea2288ef7308bb21a8bc,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983281881993560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284e513959658a57d171808e0788c6026cbf12c84885f77d2b56924ebb961190,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983281861353133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string
{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877f3c78fa25d75519189e55855e73592a2e6a56b8f5cfee02d78aedc0132db0,PodSandboxId:85990b990dc87995a8dfbd15d19e31173a62b9112d9d3088cf095d9a2eb79c7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983281802925054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2916073a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b913d5fa93c03725b56b7a886180f34b6e79cba88218227920b5c5c188a0c,PodSandboxId:8ad986722a0e9184b3f8541dcbcbb80a47765ba39399db9d884aa3164712f234,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983281832231484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473f50d6336748bfc2b65d297450d2de,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701983265645191956,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: d
f9782b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701983264609239912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486140b51e77711889ed6ef7f61897f6d58b0a3df15a1b02b40c922636892bfb,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1701983264178020573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:map[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200422fadb3739c9c51d92e4e1c0afc57789b5c1f0ec12a5c3629c294275e868,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701983264152804039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022,PodSandboxId:3dfe206eeb05a6b0a0241c2e0ec2e75802ffa6d57ef08814c0fc6a8ef1d122ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701983259742609548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
3f50d6336748bfc2b65d297450d2de,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d,PodSandboxId:652c03a9919f782932691dd53b4d4e9d2d022fac02a6e80365f8d42a6bb8d8e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701983259637589298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string
]string{io.kubernetes.container.hash: 2916073a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=afcc2941-3dfc-4d9a-9fb5-27db943fa51f name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	311d35afa7adc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   17 seconds ago      Running             coredns                   2                   e9cb63e116f1d       coredns-5dd5756b68-l6llq
	2c5c6617b826d       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   17 seconds ago      Running             kube-proxy                2                   c4bc3275f1d15       kube-proxy-w976v
	16a1223e03355       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   23 seconds ago      Running             etcd                      2                   692d53fd5068b       etcd-pause-763966
	284e513959658       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   23 seconds ago      Running             kube-scheduler            2                   9d5c01a53cb5f       kube-scheduler-pause-763966
	d36b913d5fa93       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   23 seconds ago      Running             kube-controller-manager   2                   8ad986722a0e9       kube-controller-manager-pause-763966
	877f3c78fa25d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   23 seconds ago      Running             kube-apiserver            2                   85990b990dc87       kube-apiserver-pause-763966
	f03715579d42e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   39 seconds ago      Exited              kube-proxy                1                   c4bc3275f1d15       kube-proxy-w976v
	fcedf568f2752       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   40 seconds ago      Exited              coredns                   1                   e9cb63e116f1d       coredns-5dd5756b68-l6llq
	486140b51e777       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   40 seconds ago      Exited              etcd                      1                   692d53fd5068b       etcd-pause-763966
	200422fadb373       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   40 seconds ago      Exited              kube-scheduler            1                   9d5c01a53cb5f       kube-scheduler-pause-763966
	a3701acc6ea51       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   45 seconds ago      Exited              kube-controller-manager   1                   3dfe206eeb05a       kube-controller-manager-pause-763966
	d538927394a7e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   45 seconds ago      Exited              kube-apiserver            1                   652c03a9919f7       kube-apiserver-pause-763966
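The "container status" table above is the runtime's own listing of containers on the node, including the exited attempt-1 containers and the running attempt-2 replacements. As a rough way to reproduce it by hand, assuming crictl is available inside the minikube guest (it normally ships alongside the crio runtime), an equivalent listing can be pulled over SSH using the profile name that appears in these logs:

out/minikube-linux-amd64 -p pause-763966 ssh "sudo crictl ps -a"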
	
	* 
	* ==> coredns [311d35afa7adc6d1d9942b5aec21f92190454da644eaf6f4e7910acd7f2a093b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40587 - 42611 "HINFO IN 8037292115368977335.5329363017436627300. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019716054s
	
	* 
	* ==> coredns [fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58351 - 33422 "HINFO IN 5220376274418374812.7681633137589353701. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01942101s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-763966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-763966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=pause-763966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T21_05_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 21:05:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-763966
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 21:08:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 21:08:06 +0000   Thu, 07 Dec 2023 21:05:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 21:08:06 +0000   Thu, 07 Dec 2023 21:05:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 21:08:06 +0000   Thu, 07 Dec 2023 21:05:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 21:08:06 +0000   Thu, 07 Dec 2023 21:05:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    pause-763966
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 aea938af658948548c6c83be99b33cd4
	  System UUID:                aea938af-6589-4854-8c6c-83be99b33cd4
	  Boot ID:                    437d9fe2-13fe-4f5c-8a8d-ae272544b72e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-l6llq                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m26s
	  kube-system                 etcd-pause-763966                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m40s
	  kube-system                 kube-apiserver-pause-763966             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-controller-manager-pause-763966    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-proxy-w976v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-pause-763966             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m22s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientPID     2m48s (x7 over 2m48s)  kubelet          Node pause-763966 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m48s (x8 over 2m48s)  kubelet          Node pause-763966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m48s (x8 over 2m48s)  kubelet          Node pause-763966 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m40s                  kubelet          Node pause-763966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m40s                  kubelet          Node pause-763966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m40s                  kubelet          Node pause-763966 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m39s                  kubelet          Node pause-763966 status is now: NodeReady
	  Normal  RegisteredNode           2m28s                  node-controller  Node pause-763966 event: Registered Node pause-763966 in Controller
	  Normal  Starting                 24s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)      kubelet          Node pause-763966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)      kubelet          Node pause-763966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)      kubelet          Node pause-763966 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                     node-controller  Node pause-763966 event: Registered Node pause-763966 in Controller
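The node description above is the standard kubectl view of pause-763966 at the time the logs were collected. Assuming the kubeconfig context that minikube creates for the profile is still present, roughly the same output can be regenerated with:

kubectl --context pause-763966 describe node pause-763966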
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073238] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.826361] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.871270] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.178240] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.208495] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.016878] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.128570] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.149674] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.126621] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.233538] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +10.056516] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +8.760043] systemd-fstab-generator[1258]: Ignoring "noauto" for root device
	[Dec 7 21:06] kauditd_printk_skb: 26 callbacks suppressed
	[Dec 7 21:07] systemd-fstab-generator[2240]: Ignoring "noauto" for root device
	[  +0.592943] systemd-fstab-generator[2375]: Ignoring "noauto" for root device
	[  +0.406459] systemd-fstab-generator[2423]: Ignoring "noauto" for root device
	[  +0.324206] systemd-fstab-generator[2434]: Ignoring "noauto" for root device
	[  +0.640938] systemd-fstab-generator[2516]: Ignoring "noauto" for root device
	[Dec 7 21:08] systemd-fstab-generator[3511]: Ignoring "noauto" for root device
	[  +7.036151] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [16a1223e03355b6739ac1f97e8d18a39a3efc6d93757ea2288ef7308bb21a8bc] <==
	* {"level":"warn","ts":"2023-12-07T21:08:17.618223Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:08:16.953258Z","time spent":"664.918695ms","remote":"127.0.0.1:36426","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-kese4b4tus3b6qiuxusitu3ex4\" mod_revision:443 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-kese4b4tus3b6qiuxusitu3ex4\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-kese4b4tus3b6qiuxusitu3ex4\" > >"}
	{"level":"warn","ts":"2023-12-07T21:08:18.695042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.420117ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-07T21:08:18.695116Z","caller":"traceutil/trace.go:171","msg":"trace[1420117760] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:508; }","duration":"216.510083ms","start":"2023-12-07T21:08:18.478596Z","end":"2023-12-07T21:08:18.695106Z","steps":["trace[1420117760] 'range keys from in-memory index tree'  (duration: 216.391111ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:08:18.695501Z","caller":"traceutil/trace.go:171","msg":"trace[1755123161] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"514.495155ms","start":"2023-12-07T21:08:18.180898Z","end":"2023-12-07T21:08:18.695393Z","steps":["trace[1755123161] 'process raft request'  (duration: 513.814204ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:08:18.695581Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:08:18.180369Z","time spent":"515.162916ms","remote":"127.0.0.1:36374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.237\" mod_revision:454 > success:<request_put:<key:\"/registry/masterleases/192.168.39.237\" value_size:67 lease:6971163505960064215 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.237\" > >"}
	{"level":"warn","ts":"2023-12-07T21:08:19.113006Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.760977ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16194535542814840027 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" mod_revision:450 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" value_size:4314 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-07T21:08:19.113389Z","caller":"traceutil/trace.go:171","msg":"trace[510084002] linearizableReadLoop","detail":"{readStateIndex:561; appliedIndex:559; }","duration":"660.061762ms","start":"2023-12-07T21:08:18.453314Z","end":"2023-12-07T21:08:19.113376Z","steps":["trace[510084002] 'read index received'  (duration: 241.406765ms)","trace[510084002] 'applied index is now lower than readState.Index'  (duration: 418.653925ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-07T21:08:19.113563Z","caller":"traceutil/trace.go:171","msg":"trace[825534801] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"927.325591ms","start":"2023-12-07T21:08:18.186227Z","end":"2023-12-07T21:08:19.113552Z","steps":["trace[825534801] 'process raft request'  (duration: 799.965878ms)","trace[825534801] 'compare'  (duration: 126.673353ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T21:08:19.113993Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:08:18.186211Z","time spent":"927.752137ms","remote":"127.0.0.1:36408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4376,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" mod_revision:450 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" value_size:4314 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" > >"}
	{"level":"warn","ts":"2023-12-07T21:08:19.113631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"660.32649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-763966\" ","response":"range_response_count:1 size:6830"}
	{"level":"warn","ts":"2023-12-07T21:08:19.1139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"415.970456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"warn","ts":"2023-12-07T21:08:19.11393Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.310078ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2023-12-07T21:08:19.115003Z","caller":"traceutil/trace.go:171","msg":"trace[1251109391] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-763966; range_end:; response_count:1; response_revision:510; }","duration":"661.704004ms","start":"2023-12-07T21:08:18.453289Z","end":"2023-12-07T21:08:19.114993Z","steps":["trace[1251109391] 'agreement among raft nodes before linearized reading'  (duration: 660.289909ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:08:19.115159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:08:18.453277Z","time spent":"661.870339ms","remote":"127.0.0.1:36408","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":6853,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-763966\" "}
	{"level":"info","ts":"2023-12-07T21:08:19.115053Z","caller":"traceutil/trace.go:171","msg":"trace[43502776] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:510; }","duration":"417.129018ms","start":"2023-12-07T21:08:18.697917Z","end":"2023-12-07T21:08:19.115046Z","steps":["trace[43502776] 'agreement among raft nodes before linearized reading'  (duration: 415.948301ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:08:19.115373Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:08:18.697906Z","time spent":"417.459533ms","remote":"127.0.0.1:36404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":445,"request content":"key:\"/registry/services/endpoints/default/kubernetes\" "}
	{"level":"info","ts":"2023-12-07T21:08:19.115124Z","caller":"traceutil/trace.go:171","msg":"trace[915616064] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:510; }","duration":"233.502996ms","start":"2023-12-07T21:08:18.881616Z","end":"2023-12-07T21:08:19.115119Z","steps":["trace[915616064] 'agreement among raft nodes before linearized reading'  (duration: 232.297715ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:08:19.385989Z","caller":"traceutil/trace.go:171","msg":"trace[1716342016] linearizableReadLoop","detail":"{readStateIndex:562; appliedIndex:561; }","duration":"260.25328ms","start":"2023-12-07T21:08:19.125713Z","end":"2023-12-07T21:08:19.385966Z","steps":["trace[1716342016] 'read index received'  (duration: 177.617536ms)","trace[1716342016] 'applied index is now lower than readState.Index'  (duration: 82.635154ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-07T21:08:19.386364Z","caller":"traceutil/trace.go:171","msg":"trace[532659987] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"261.014093ms","start":"2023-12-07T21:08:19.125337Z","end":"2023-12-07T21:08:19.386351Z","steps":["trace[532659987] 'process raft request'  (duration: 178.036714ms)","trace[532659987] 'compare'  (duration: 82.513469ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T21:08:19.386549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.814282ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2023-12-07T21:08:19.386602Z","caller":"traceutil/trace.go:171","msg":"trace[259796995] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:511; }","duration":"257.861461ms","start":"2023-12-07T21:08:19.128732Z","end":"2023-12-07T21:08:19.386593Z","steps":["trace[259796995] 'agreement among raft nodes before linearized reading'  (duration: 257.8026ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:08:19.386465Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.67189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:481"}
	{"level":"info","ts":"2023-12-07T21:08:19.386774Z","caller":"traceutil/trace.go:171","msg":"trace[780446668] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:511; }","duration":"261.069543ms","start":"2023-12-07T21:08:19.125696Z","end":"2023-12-07T21:08:19.386765Z","steps":["trace[780446668] 'agreement among raft nodes before linearized reading'  (duration: 260.572358ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:08:19.386525Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.504931ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-763966\" ","response":"range_response_count:1 size:6830"}
	{"level":"info","ts":"2023-12-07T21:08:19.386911Z","caller":"traceutil/trace.go:171","msg":"trace[1102237510] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-763966; range_end:; response_count:1; response_revision:511; }","duration":"258.888032ms","start":"2023-12-07T21:08:19.128014Z","end":"2023-12-07T21:08:19.386902Z","steps":["trace[1102237510] 'agreement among raft nodes before linearized reading'  (duration: 258.483298ms)"],"step_count":1}
	
	* 
	* ==> etcd [486140b51e77711889ed6ef7f61897f6d58b0a3df15a1b02b40c922636892bfb] <==
	* {"level":"info","ts":"2023-12-07T21:07:45.347269Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-07T21:07:46.4045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-07T21:07:46.40467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-07T21:07:46.404749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be received MsgPreVoteResp from 3f0f97df8a50e0be at term 2"}
	{"level":"info","ts":"2023-12-07T21:07:46.404808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be became candidate at term 3"}
	{"level":"info","ts":"2023-12-07T21:07:46.404845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be received MsgVoteResp from 3f0f97df8a50e0be at term 3"}
	{"level":"info","ts":"2023-12-07T21:07:46.404886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be became leader at term 3"}
	{"level":"info","ts":"2023-12-07T21:07:46.404925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3f0f97df8a50e0be elected leader 3f0f97df8a50e0be at term 3"}
	{"level":"info","ts":"2023-12-07T21:07:46.473864Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3f0f97df8a50e0be","local-member-attributes":"{Name:pause-763966 ClientURLs:[https://192.168.39.237:2379]}","request-path":"/0/members/3f0f97df8a50e0be/attributes","cluster-id":"db2c13b3d7f66f6a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T21:07:46.474143Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:07:46.47598Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:07:46.481847Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.237:2379"}
	{"level":"info","ts":"2023-12-07T21:07:46.478394Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T21:07:46.485668Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T21:07:46.485815Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T21:07:59.306232Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-07T21:07:59.306342Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-763966","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.237:2380"],"advertise-client-urls":["https://192.168.39.237:2379"]}
	{"level":"warn","ts":"2023-12-07T21:07:59.306645Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-07T21:07:59.3067Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-07T21:07:59.308359Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.237:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-07T21:07:59.308502Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.237:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-07T21:07:59.308642Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3f0f97df8a50e0be","current-leader-member-id":"3f0f97df8a50e0be"}
	{"level":"info","ts":"2023-12-07T21:07:59.312748Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.237:2380"}
	{"level":"info","ts":"2023-12-07T21:07:59.312906Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.237:2380"}
	{"level":"info","ts":"2023-12-07T21:07:59.312951Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-763966","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.237:2380"],"advertise-client-urls":["https://192.168.39.237:2379"]}
	
	* 
	* ==> kernel <==
	*  21:08:25 up 3 min,  0 users,  load average: 2.21, 0.86, 0.32
	Linux pause-763966 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [877f3c78fa25d75519189e55855e73592a2e6a56b8f5cfee02d78aedc0132db0] <==
	* I1207 21:08:08.800090       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1207 21:08:08.852095       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1207 21:08:08.888216       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 21:08:08.895031       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 21:08:17.619902       1 trace.go:236] Trace[1244975485]: "Get" accept:application/json, */*,audit-id:f378f76d-ccfa-4933-92ba-d5b4b9c07d91,client:192.168.39.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-763966,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Dec-2023 21:08:16.953) (total time: 666ms):
	Trace[1244975485]: ---"About to write a response" 665ms (21:08:17.619)
	Trace[1244975485]: [666.202561ms] [666.202561ms] END
	I1207 21:08:17.620143       1 trace.go:236] Trace[1268883288]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:f528a569-af7f-423c-ad9e-8a1623164e1c,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-kese4b4tus3b6qiuxusitu3ex4,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (07-Dec-2023 21:08:16.951) (total time: 667ms):
	Trace[1268883288]: ["GuaranteedUpdate etcd3" audit-id:f528a569-af7f-423c-ad9e-8a1623164e1c,key:/leases/kube-system/apiserver-kese4b4tus3b6qiuxusitu3ex4,type:*coordination.Lease,resource:leases.coordination.k8s.io 667ms (21:08:16.951)
	Trace[1268883288]:  ---"Txn call completed" 666ms (21:08:17.619)]
	Trace[1268883288]: [667.990062ms] [667.990062ms] END
	I1207 21:08:18.696543       1 trace.go:236] Trace[1578833462]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.237,type:*v1.Endpoints,resource:apiServerIPInfo (07-Dec-2023 21:08:18.034) (total time: 662ms):
	Trace[1578833462]: ---"Transaction prepared" 144ms (21:08:18.180)
	Trace[1578833462]: ---"Txn call completed" 516ms (21:08:18.696)
	Trace[1578833462]: [662.163885ms] [662.163885ms] END
	I1207 21:08:19.117905       1 trace.go:236] Trace[1540491190]: "Get" accept:application/json, */*,audit-id:38097c31-cf7e-4bd5-af68-f239a8b200fc,client:192.168.39.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-763966,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Dec-2023 21:08:18.452) (total time: 665ms):
	Trace[1540491190]: ---"About to write a response" 663ms (21:08:19.115)
	Trace[1540491190]: [665.059274ms] [665.059274ms] END
	I1207 21:08:19.118193       1 trace.go:236] Trace[1483288823]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:729ff8c8-c917-4f10-be80-8ca3bac30aef,client:192.168.39.237,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-763966/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (07-Dec-2023 21:08:18.183) (total time: 935ms):
	Trace[1483288823]: ["GuaranteedUpdate etcd3" audit-id:729ff8c8-c917-4f10-be80-8ca3bac30aef,key:/pods/kube-system/kube-scheduler-pause-763966,type:*core.Pod,resource:pods 934ms (21:08:18.183)
	Trace[1483288823]:  ---"Txn call completed" 930ms (21:08:19.116)]
	Trace[1483288823]: ---"Object stored in database" 931ms (21:08:19.116)
	Trace[1483288823]: [935.037612ms] [935.037612ms] END
	I1207 21:08:19.591039       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 21:08:19.648343       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d] <==
	* 
	* 
	* ==> kube-controller-manager [a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022] <==
	* 
	* 
	* ==> kube-controller-manager [d36b913d5fa93c03725b56b7a886180f34b6e79cba88218227920b5c5c188a0c] <==
	* I1207 21:08:19.602629       1 shared_informer.go:318] Caches are synced for attach detach
	I1207 21:08:19.602766       1 shared_informer.go:318] Caches are synced for PVC protection
	I1207 21:08:19.602809       1 shared_informer.go:318] Caches are synced for service account
	I1207 21:08:19.605267       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1207 21:08:19.605333       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1207 21:08:19.605284       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1207 21:08:19.609376       1 shared_informer.go:318] Caches are synced for TTL
	I1207 21:08:19.628266       1 shared_informer.go:318] Caches are synced for PV protection
	I1207 21:08:19.630584       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1207 21:08:19.633150       1 shared_informer.go:318] Caches are synced for GC
	I1207 21:08:19.692168       1 shared_informer.go:318] Caches are synced for daemon sets
	I1207 21:08:19.716661       1 shared_informer.go:318] Caches are synced for taint
	I1207 21:08:19.716935       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1207 21:08:19.717034       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1207 21:08:19.717096       1 taint_manager.go:210] "Sending events to api server"
	I1207 21:08:19.717130       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-763966"
	I1207 21:08:19.717210       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1207 21:08:19.717555       1 event.go:307] "Event occurred" object="pause-763966" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-763966 event: Registered Node pause-763966 in Controller"
	I1207 21:08:19.725762       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 21:08:19.752636       1 shared_informer.go:318] Caches are synced for deployment
	I1207 21:08:19.785617       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 21:08:19.808643       1 shared_informer.go:318] Caches are synced for disruption
	I1207 21:08:20.131218       1 shared_informer.go:318] Caches are synced for garbage collector
	I1207 21:08:20.131324       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1207 21:08:20.154473       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [2c5c6617b826def475d3fa2c178ff332e191388d1387175aadf0a351c5181d28] <==
	* I1207 21:08:07.638882       1 server_others.go:69] "Using iptables proxy"
	I1207 21:08:07.648245       1 node.go:141] Successfully retrieved node IP: 192.168.39.237
	I1207 21:08:07.685668       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 21:08:07.685724       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 21:08:07.688524       1 server_others.go:152] "Using iptables Proxier"
	I1207 21:08:07.688622       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 21:08:07.688861       1 server.go:846] "Version info" version="v1.28.4"
	I1207 21:08:07.688896       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 21:08:07.689997       1 config.go:188] "Starting service config controller"
	I1207 21:08:07.690062       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 21:08:07.690086       1 config.go:97] "Starting endpoint slice config controller"
	I1207 21:08:07.690118       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 21:08:07.690732       1 config.go:315] "Starting node config controller"
	I1207 21:08:07.690770       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 21:08:07.790796       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 21:08:07.790877       1 shared_informer.go:318] Caches are synced for service config
	I1207 21:08:07.790894       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447] <==
	* I1207 21:07:46.121250       1 server_others.go:69] "Using iptables proxy"
	E1207 21:07:46.125210       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763966": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:47.190094       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763966": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:49.410295       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763966": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:53.685250       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763966": dial tcp 192.168.39.237:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [200422fadb3739c9c51d92e4e1c0afc57789b5c1f0ec12a5c3629c294275e868] <==
	* E1207 21:07:54.635258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.237:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:54.923990       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.237:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:54.924060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.237:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:55.040944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.237:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:55.041097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.237:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:55.123091       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:55.123184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:55.217220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.237:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:55.217382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.237:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:55.651008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.237:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:55.651128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.237:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:55.701751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:55.701887       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:56.664101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:56.664218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:56.946586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.237:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:56.946720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.237:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:57.228009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.237:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:57.228135       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.237:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:59.460262       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I1207 21:07:59.460973       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1207 21:07:59.461048       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1207 21:07:59.461091       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 21:07:59.461712       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1207 21:07:59.461888       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [284e513959658a57d171808e0788c6026cbf12c84885f77d2b56924ebb961190] <==
	* I1207 21:08:04.060742       1 serving.go:348] Generated self-signed cert in-memory
	W1207 21:08:06.764958       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 21:08:06.765037       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 21:08:06.765065       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 21:08:06.765088       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 21:08:06.822976       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1207 21:08:06.823059       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 21:08:06.826516       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 21:08:06.826639       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 21:08:06.828950       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1207 21:08:06.829037       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1207 21:08:06.928346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 21:05:13 UTC, ends at Thu 2023-12-07 21:08:25 UTC. --
	Dec 07 21:08:01 pause-763966 kubelet[3517]: W1207 21:08:01.975396    3517 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-763966&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:01 pause-763966 kubelet[3517]: E1207 21:08:01.975513    3517 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-763966&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.068320    3517 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-763966.179ea8c6d41a293d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-763966", UID:"pause-763966", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"pause-763966"}, FirstTimestamp:time.Date(2023, time.December, 7, 21, 8, 1, 108101437, time.Local), LastTimestamp:time.Date(2
023, time.December, 7, 21, 8, 1, 108101437, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-763966"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.237:8443: connect: connection refused'(may retry after sleeping)
	Dec 07 21:08:02 pause-763966 kubelet[3517]: W1207 21:08:02.534016    3517 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.534070    3517 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.539675    3517 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-763966?timeout=10s\": dial tcp 192.168.39.237:8443: connect: connection refused" interval="1.6s"
	Dec 07 21:08:02 pause-763966 kubelet[3517]: W1207 21:08:02.591324    3517 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.591374    3517 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: W1207 21:08:02.639194    3517 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.639250    3517 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: I1207 21:08:02.652717    3517 kubelet_node_status.go:70] "Attempting to register node" node="pause-763966"
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.653114    3517 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.237:8443: connect: connection refused" node="pause-763966"
	Dec 07 21:08:04 pause-763966 kubelet[3517]: I1207 21:08:04.254838    3517 kubelet_node_status.go:70] "Attempting to register node" node="pause-763966"
	Dec 07 21:08:06 pause-763966 kubelet[3517]: I1207 21:08:06.877498    3517 kubelet_node_status.go:108] "Node was previously registered" node="pause-763966"
	Dec 07 21:08:06 pause-763966 kubelet[3517]: I1207 21:08:06.877598    3517 kubelet_node_status.go:73] "Successfully registered node" node="pause-763966"
	Dec 07 21:08:06 pause-763966 kubelet[3517]: I1207 21:08:06.879625    3517 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 07 21:08:06 pause-763966 kubelet[3517]: I1207 21:08:06.880542    3517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.102281    3517 apiserver.go:52] "Watching apiserver"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.111358    3517 topology_manager.go:215] "Topology Admit Handler" podUID="fb4ba2f0-5660-4044-9f09-2af3a79c8599" podNamespace="kube-system" podName="kube-proxy-w976v"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.114905    3517 topology_manager.go:215] "Topology Admit Handler" podUID="0336a5ef-6d08-4058-acfe-4ec206ae8c93" podNamespace="kube-system" podName="coredns-5dd5756b68-l6llq"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.133473    3517 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.177826    3517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb4ba2f0-5660-4044-9f09-2af3a79c8599-lib-modules\") pod \"kube-proxy-w976v\" (UID: \"fb4ba2f0-5660-4044-9f09-2af3a79c8599\") " pod="kube-system/kube-proxy-w976v"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.177857    3517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb4ba2f0-5660-4044-9f09-2af3a79c8599-xtables-lock\") pod \"kube-proxy-w976v\" (UID: \"fb4ba2f0-5660-4044-9f09-2af3a79c8599\") " pod="kube-system/kube-proxy-w976v"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.416235    3517 scope.go:117] "RemoveContainer" containerID="fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.416646    3517 scope.go:117] "RemoveContainer" containerID="f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:08:24.339420   48887 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17719-9628/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
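The "bufio.Scanner: token too long" error in the stderr block above is a Go standard-library limit rather than a problem with the log file itself: bufio.Scanner rejects any single line longer than its token cap, which defaults to 64 KiB. The following minimal Go sketch reproduces that failure mode and shows the usual workaround of raising the scanner's buffer cap; the file name and the 1 MiB limit are illustrative assumptions, and this is not minikube's actual implementation.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical input: any text file containing a very long line,
	// such as the lastStart.txt mentioned in the error above.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	// With the default cap (bufio.MaxScanTokenSize, 64 KiB), a longer line
	// makes Scan() stop and s.Err() return bufio.ErrTooLong
	// ("bufio.Scanner: token too long"). Raising the cap avoids that;
	// 1 MiB here is an arbitrary example value.
	s.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for s.Scan() {
		fmt.Println(s.Text())
	}
	if err := s.Err(); err != nil {
		log.Fatal(err)
	}
}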
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-763966 -n pause-763966
helpers_test.go:261: (dbg) Run:  kubectl --context pause-763966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-763966 -n pause-763966
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-763966 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-763966 logs -n 25: (1.587413682s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | systemctl cat cri-docker                              |                        |         |         |                     |                     |
	|         | --no-pager                                            |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo cat                             | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf  |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo cat                             | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service            |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | cri-dockerd --version                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | systemctl status containerd                           |                        |         |         |                     |                     |
	|         | --all --full --no-pager                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | systemctl cat containerd                              |                        |         |         |                     |                     |
	|         | --no-pager                                            |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo cat                             | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | /lib/systemd/system/containerd.service                |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo cat                             | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | /etc/containerd/config.toml                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | containerd config dump                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | systemctl status crio --all                           |                        |         |         |                     |                     |
	|         | --full --no-pager                                     |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo                                 | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | systemctl cat crio --no-pager                         |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo find                            | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                         |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                  |                        |         |         |                     |                     |
	| ssh     | -p cilium-715748 sudo crio                            | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | config                                                |                        |         |         |                     |                     |
	| delete  | -p cilium-715748                                      | cilium-715748          | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC | 07 Dec 23 21:05 UTC |
	| start   | -p old-k8s-version-483745                             | old-k8s-version-483745 | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC | 07 Dec 23 21:08 UTC |
	|         | --memory=2200                                         |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                        |         |         |                     |                     |
	|         | --kvm-network=default                                 |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                        |         |         |                     |                     |
	|         | --keep-context=false                                  |                        |         |         |                     |                     |
	|         | --driver=kvm2                                         |                        |         |         |                     |                     |
	|         | --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                          |                        |         |         |                     |                     |
	| start   | -p stopped-upgrade-099448                             | stopped-upgrade-099448 | jenkins | v1.32.0 | 07 Dec 23 21:05 UTC |                     |
	|         | --memory=2200                                         |                        |         |         |                     |                     |
	|         | --alsologtostderr                                     |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                        |         |         |                     |                     |
	|         | --container-runtime=crio                              |                        |         |         |                     |                     |
	| ssh     | cert-options-620116 ssh                               | cert-options-620116    | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	|         | openssl x509 -text -noout -in                         |                        |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                        |         |         |                     |                     |
	| ssh     | -p cert-options-620116 -- sudo                        | cert-options-620116    | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                        |         |         |                     |                     |
	| delete  | -p cert-options-620116                                | cert-options-620116    | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	| start   | -p no-preload-950431                                  | no-preload-950431      | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC |                     |
	|         | --memory=2200 --alsologtostderr                       |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                           |                        |         |         |                     |                     |
	|         | --driver=kvm2                                         |                        |         |         |                     |                     |
	|         | --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                     |                        |         |         |                     |                     |
	| start   | -p pause-763966                                       | pause-763966           | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:08 UTC |
	|         | --alsologtostderr                                     |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                        |         |         |                     |                     |
	|         | --container-runtime=crio                              |                        |         |         |                     |                     |
	| delete  | -p stopped-upgrade-099448                             | stopped-upgrade-099448 | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:07 UTC |
	| start   | -p embed-certs-598346                                 | embed-certs-598346     | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC |                     |
	|         | --memory=2200                                         |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                          |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-483745       | old-k8s-version-483745 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-483745                             | old-k8s-version-483745 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC |                     |
	|         | --alsologtostderr -v=3                                |                        |         |         |                     |                     |
	|---------|-------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 21:07:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 21:07:04.157683   48213 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:07:04.158063   48213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:07:04.158075   48213 out.go:309] Setting ErrFile to fd 2...
	I1207 21:07:04.158082   48213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:07:04.158349   48213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:07:04.159166   48213 out.go:303] Setting JSON to false
	I1207 21:07:04.160409   48213 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6570,"bootTime":1701976654,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:07:04.160491   48213 start.go:138] virtualization: kvm guest
	I1207 21:07:04.163051   48213 out.go:177] * [embed-certs-598346] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:07:04.164721   48213 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:07:04.164735   48213 notify.go:220] Checking for updates...
	I1207 21:07:04.166228   48213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:07:04.167848   48213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:07:04.169308   48213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:07:04.170875   48213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:07:04.172340   48213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:07:04.174340   48213 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:07:04.174453   48213 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:07:04.174587   48213 config.go:182] Loaded profile config "pause-763966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:07:04.174677   48213 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:07:04.213520   48213 out.go:177] * Using the kvm2 driver based on user configuration
	I1207 21:07:04.214791   48213 start.go:298] selected driver: kvm2
	I1207 21:07:04.214805   48213 start.go:902] validating driver "kvm2" against <nil>
	I1207 21:07:04.214816   48213 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:07:04.215568   48213 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:07:04.215652   48213 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:07:04.231808   48213 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:07:04.231847   48213 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 21:07:04.232086   48213 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 21:07:04.232148   48213 cni.go:84] Creating CNI manager for ""
	I1207 21:07:04.232165   48213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:07:04.232185   48213 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 21:07:04.232195   48213 start_flags.go:323] config:
	{Name:embed-certs-598346 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:07:04.232404   48213 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:07:04.234355   48213 out.go:177] * Starting control plane node embed-certs-598346 in cluster embed-certs-598346
	I1207 21:07:01.601645   47677 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 21:07:01.601802   47677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:07:01.601849   47677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:07:01.618878   47677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I1207 21:07:01.619289   47677 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:07:01.619811   47677 main.go:141] libmachine: Using API Version  1
	I1207 21:07:01.619850   47677 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:07:01.620192   47677 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:07:01.620415   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:07:01.620584   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:01.620761   47677 start.go:159] libmachine.API.Create for "no-preload-950431" (driver="kvm2")
	I1207 21:07:01.620789   47677 client.go:168] LocalClient.Create starting
	I1207 21:07:01.620820   47677 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 21:07:01.620858   47677 main.go:141] libmachine: Decoding PEM data...
	I1207 21:07:01.620887   47677 main.go:141] libmachine: Parsing certificate...
	I1207 21:07:01.620955   47677 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 21:07:01.620985   47677 main.go:141] libmachine: Decoding PEM data...
	I1207 21:07:01.621010   47677 main.go:141] libmachine: Parsing certificate...
	I1207 21:07:01.621041   47677 main.go:141] libmachine: Running pre-create checks...
	I1207 21:07:01.621055   47677 main.go:141] libmachine: (no-preload-950431) Calling .PreCreateCheck
	I1207 21:07:01.621368   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:07:01.621772   47677 main.go:141] libmachine: Creating machine...
	I1207 21:07:01.621785   47677 main.go:141] libmachine: (no-preload-950431) Calling .Create
	I1207 21:07:01.621909   47677 main.go:141] libmachine: (no-preload-950431) Creating KVM machine...
	I1207 21:07:01.623049   47677 main.go:141] libmachine: (no-preload-950431) DBG | found existing default KVM network
	I1207 21:07:01.624314   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:01.624141   47999 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e1:ad:58} reservation:<nil>}
	I1207 21:07:01.625488   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:01.625366   47999 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ac720}
	I1207 21:07:01.631356   47677 main.go:141] libmachine: (no-preload-950431) DBG | trying to create private KVM network mk-no-preload-950431 192.168.50.0/24...
	I1207 21:07:01.705010   47677 main.go:141] libmachine: (no-preload-950431) DBG | private KVM network mk-no-preload-950431 192.168.50.0/24 created
	I1207 21:07:01.705057   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:01.704972   47999 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:07:01.705077   47677 main.go:141] libmachine: (no-preload-950431) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431 ...
	I1207 21:07:01.705094   47677 main.go:141] libmachine: (no-preload-950431) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 21:07:01.705123   47677 main.go:141] libmachine: (no-preload-950431) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 21:07:01.917863   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:01.917745   47999 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa...
	I1207 21:07:02.023714   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:02.023601   47999 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/no-preload-950431.rawdisk...
	I1207 21:07:02.023756   47677 main.go:141] libmachine: (no-preload-950431) DBG | Writing magic tar header
	I1207 21:07:02.023779   47677 main.go:141] libmachine: (no-preload-950431) DBG | Writing SSH key tar header
	I1207 21:07:02.023794   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:02.023746   47999 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431 ...
	I1207 21:07:02.023914   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431
	I1207 21:07:02.023956   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 21:07:02.023972   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431 (perms=drwx------)
	I1207 21:07:02.023993   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 21:07:02.024009   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 21:07:02.024032   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 21:07:02.024054   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 21:07:02.024069   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:07:02.024089   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 21:07:02.024105   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 21:07:02.024121   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home/jenkins
	I1207 21:07:02.024138   47677 main.go:141] libmachine: (no-preload-950431) DBG | Checking permissions on dir: /home
	I1207 21:07:02.024156   47677 main.go:141] libmachine: (no-preload-950431) DBG | Skipping /home - not owner
	I1207 21:07:02.024170   47677 main.go:141] libmachine: (no-preload-950431) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 21:07:02.024185   47677 main.go:141] libmachine: (no-preload-950431) Creating domain...
	I1207 21:07:02.025314   47677 main.go:141] libmachine: (no-preload-950431) define libvirt domain using xml: 
	I1207 21:07:02.025341   47677 main.go:141] libmachine: (no-preload-950431) <domain type='kvm'>
	I1207 21:07:02.025383   47677 main.go:141] libmachine: (no-preload-950431)   <name>no-preload-950431</name>
	I1207 21:07:02.025423   47677 main.go:141] libmachine: (no-preload-950431)   <memory unit='MiB'>2200</memory>
	I1207 21:07:02.025437   47677 main.go:141] libmachine: (no-preload-950431)   <vcpu>2</vcpu>
	I1207 21:07:02.025448   47677 main.go:141] libmachine: (no-preload-950431)   <features>
	I1207 21:07:02.025459   47677 main.go:141] libmachine: (no-preload-950431)     <acpi/>
	I1207 21:07:02.025471   47677 main.go:141] libmachine: (no-preload-950431)     <apic/>
	I1207 21:07:02.025480   47677 main.go:141] libmachine: (no-preload-950431)     <pae/>
	I1207 21:07:02.025491   47677 main.go:141] libmachine: (no-preload-950431)     
	I1207 21:07:02.025543   47677 main.go:141] libmachine: (no-preload-950431)   </features>
	I1207 21:07:02.025561   47677 main.go:141] libmachine: (no-preload-950431)   <cpu mode='host-passthrough'>
	I1207 21:07:02.025568   47677 main.go:141] libmachine: (no-preload-950431)   
	I1207 21:07:02.025576   47677 main.go:141] libmachine: (no-preload-950431)   </cpu>
	I1207 21:07:02.025592   47677 main.go:141] libmachine: (no-preload-950431)   <os>
	I1207 21:07:02.025601   47677 main.go:141] libmachine: (no-preload-950431)     <type>hvm</type>
	I1207 21:07:02.025608   47677 main.go:141] libmachine: (no-preload-950431)     <boot dev='cdrom'/>
	I1207 21:07:02.025616   47677 main.go:141] libmachine: (no-preload-950431)     <boot dev='hd'/>
	I1207 21:07:02.025627   47677 main.go:141] libmachine: (no-preload-950431)     <bootmenu enable='no'/>
	I1207 21:07:02.025648   47677 main.go:141] libmachine: (no-preload-950431)   </os>
	I1207 21:07:02.025662   47677 main.go:141] libmachine: (no-preload-950431)   <devices>
	I1207 21:07:02.025672   47677 main.go:141] libmachine: (no-preload-950431)     <disk type='file' device='cdrom'>
	I1207 21:07:02.025689   47677 main.go:141] libmachine: (no-preload-950431)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/boot2docker.iso'/>
	I1207 21:07:02.025702   47677 main.go:141] libmachine: (no-preload-950431)       <target dev='hdc' bus='scsi'/>
	I1207 21:07:02.025714   47677 main.go:141] libmachine: (no-preload-950431)       <readonly/>
	I1207 21:07:02.025729   47677 main.go:141] libmachine: (no-preload-950431)     </disk>
	I1207 21:07:02.025744   47677 main.go:141] libmachine: (no-preload-950431)     <disk type='file' device='disk'>
	I1207 21:07:02.025757   47677 main.go:141] libmachine: (no-preload-950431)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 21:07:02.025795   47677 main.go:141] libmachine: (no-preload-950431)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/no-preload-950431.rawdisk'/>
	I1207 21:07:02.025812   47677 main.go:141] libmachine: (no-preload-950431)       <target dev='hda' bus='virtio'/>
	I1207 21:07:02.025822   47677 main.go:141] libmachine: (no-preload-950431)     </disk>
	I1207 21:07:02.025829   47677 main.go:141] libmachine: (no-preload-950431)     <interface type='network'>
	I1207 21:07:02.025842   47677 main.go:141] libmachine: (no-preload-950431)       <source network='mk-no-preload-950431'/>
	I1207 21:07:02.025855   47677 main.go:141] libmachine: (no-preload-950431)       <model type='virtio'/>
	I1207 21:07:02.025877   47677 main.go:141] libmachine: (no-preload-950431)     </interface>
	I1207 21:07:02.025896   47677 main.go:141] libmachine: (no-preload-950431)     <interface type='network'>
	I1207 21:07:02.025906   47677 main.go:141] libmachine: (no-preload-950431)       <source network='default'/>
	I1207 21:07:02.025911   47677 main.go:141] libmachine: (no-preload-950431)       <model type='virtio'/>
	I1207 21:07:02.025933   47677 main.go:141] libmachine: (no-preload-950431)     </interface>
	I1207 21:07:02.025950   47677 main.go:141] libmachine: (no-preload-950431)     <serial type='pty'>
	I1207 21:07:02.025972   47677 main.go:141] libmachine: (no-preload-950431)       <target port='0'/>
	I1207 21:07:02.025991   47677 main.go:141] libmachine: (no-preload-950431)     </serial>
	I1207 21:07:02.026004   47677 main.go:141] libmachine: (no-preload-950431)     <console type='pty'>
	I1207 21:07:02.026017   47677 main.go:141] libmachine: (no-preload-950431)       <target type='serial' port='0'/>
	I1207 21:07:02.026034   47677 main.go:141] libmachine: (no-preload-950431)     </console>
	I1207 21:07:02.026050   47677 main.go:141] libmachine: (no-preload-950431)     <rng model='virtio'>
	I1207 21:07:02.026065   47677 main.go:141] libmachine: (no-preload-950431)       <backend model='random'>/dev/random</backend>
	I1207 21:07:02.026077   47677 main.go:141] libmachine: (no-preload-950431)     </rng>
	I1207 21:07:02.026089   47677 main.go:141] libmachine: (no-preload-950431)     
	I1207 21:07:02.026106   47677 main.go:141] libmachine: (no-preload-950431)     
	I1207 21:07:02.026119   47677 main.go:141] libmachine: (no-preload-950431)   </devices>
	I1207 21:07:02.026134   47677 main.go:141] libmachine: (no-preload-950431) </domain>
	I1207 21:07:02.026149   47677 main.go:141] libmachine: (no-preload-950431) 
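The block above is the libvirt domain definition for the VM: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a cdrom, the raw disk, and two virtio NICs (the dedicated mk-no-preload-950431 network plus the default network). Below is a trimmed sketch of rendering such a skeleton with text/template; it is illustrative only and omits most of the devices the real definition carries.

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a cut-down skeleton of the definition shown in the log:
// name, memory, vcpus, one raw disk and one virtio network interface.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainParams struct {
	Name, DiskPath, Network string
	MemoryMiB, VCPUs        int
}

func main() {
	p := domainParams{
		Name:      "no-preload-950431",
		DiskPath:  "/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/no-preload-950431.rawdisk",
		Network:   "mk-no-preload-950431",
		MemoryMiB: 2200,
		VCPUs:     2,
	}
	if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}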
	I1207 21:07:02.030791   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:c3:dc:38 in network default
	I1207 21:07:02.031366   47677 main.go:141] libmachine: (no-preload-950431) Ensuring networks are active...
	I1207 21:07:02.031400   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:02.032131   47677 main.go:141] libmachine: (no-preload-950431) Ensuring network default is active
	I1207 21:07:02.032431   47677 main.go:141] libmachine: (no-preload-950431) Ensuring network mk-no-preload-950431 is active
	I1207 21:07:02.033040   47677 main.go:141] libmachine: (no-preload-950431) Getting domain xml...
	I1207 21:07:02.033819   47677 main.go:141] libmachine: (no-preload-950431) Creating domain...
	I1207 21:07:03.695080   47677 main.go:141] libmachine: (no-preload-950431) Waiting to get IP...
	I1207 21:07:03.696019   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:03.696557   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:03.696586   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:03.696531   47999 retry.go:31] will retry after 310.733444ms: waiting for machine to come up
	I1207 21:07:04.008957   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:04.009459   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:04.009490   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:04.009432   47999 retry.go:31] will retry after 321.879279ms: waiting for machine to come up
	I1207 21:07:04.334271   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:04.334755   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:04.334784   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:04.334717   47999 retry.go:31] will retry after 378.524792ms: waiting for machine to come up
	I1207 21:07:04.715210   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:04.715782   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:04.715810   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:04.715719   47999 retry.go:31] will retry after 389.607351ms: waiting for machine to come up
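The retry.go lines above poll for the VM's DHCP lease with delays that grow from roughly 300ms toward several seconds. A generic sketch of that retry-with-backoff pattern is shown here; the function names and the jitter scheme are chosen for illustration rather than taken from minikube.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn until it succeeds or attempts run out, sleeping
// a little longer (with jitter) between tries, like the retry.go lines above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
		return errors.New("unable to find current IP address") // stand-in for the DHCP-lease lookup
	})
	fmt.Println(err)
}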
	I1207 21:07:04.192647   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:06.691664   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:08.692066   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
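In parallel, pod_ready.go keeps reporting that the coredns pod's Ready condition is still False. A hedged client-go sketch of the same readiness check follows; the kubeconfig path is a placeholder and the pod name is simply the one named in the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// condition the `has status "Ready":"False"` lines above are polling.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-5644d7b6d9-b8rqh", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod))
}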
	I1207 21:07:04.235630   48213 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:07:04.235662   48213 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 21:07:04.235668   48213 cache.go:56] Caching tarball of preloaded images
	I1207 21:07:04.235750   48213 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:07:04.235763   48213 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 21:07:04.235850   48213 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/config.json ...
	I1207 21:07:04.235888   48213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/config.json: {Name:mk6253fc7de4a52e34595793c259307458a0de3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:07:04.236050   48213 start.go:365] acquiring machines lock for embed-certs-598346: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
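The config save and the machines lock above both use a named lock with a retry delay and a timeout ({Delay:500ms Timeout:...}). Below is a simple sketch of acquiring a lock file with that shape; it uses an exclusive-create loop purely for illustration and is not the lock implementation minikube actually uses.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLockFile keeps trying to create lockPath exclusively until it
// succeeds or the timeout elapses, polling at the given delay.
func acquireLockFile(lockPath string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + lockPath)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLockFile("/tmp/embed-certs-598346.lock", 500*time.Millisecond, 5*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock acquired; safe to write config.json")
}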
	I1207 21:07:05.107351   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:05.107866   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:05.107896   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:05.107830   47999 retry.go:31] will retry after 680.922555ms: waiting for machine to come up
	I1207 21:07:05.790667   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:05.791196   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:05.791256   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:05.791131   47999 retry.go:31] will retry after 773.589238ms: waiting for machine to come up
	I1207 21:07:06.565801   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:06.566216   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:06.566245   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:06.566161   47999 retry.go:31] will retry after 1.172647624s: waiting for machine to come up
	I1207 21:07:07.740835   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:07.741251   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:07.741274   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:07.741213   47999 retry.go:31] will retry after 1.281716702s: waiting for machine to come up
	I1207 21:07:09.024381   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:09.024894   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:09.024920   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:09.024848   47999 retry.go:31] will retry after 1.3476333s: waiting for machine to come up
	I1207 21:07:10.693386   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:13.193600   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:10.374187   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:10.374745   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:10.374776   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:10.374692   47999 retry.go:31] will retry after 1.507121871s: waiting for machine to come up
	I1207 21:07:11.883107   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:11.883625   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:11.883656   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:11.883567   47999 retry.go:31] will retry after 1.85350099s: waiting for machine to come up
	I1207 21:07:13.739119   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:13.739620   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:13.739655   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:13.739572   47999 retry.go:31] will retry after 3.34155315s: waiting for machine to come up
	I1207 21:07:15.692450   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:17.692705   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:17.082837   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:17.083289   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:17.083320   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:17.083237   47999 retry.go:31] will retry after 3.305771578s: waiting for machine to come up
	I1207 21:07:20.192134   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:22.192823   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:20.392762   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:20.393285   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:07:20.393307   47677 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:07:20.393261   47999 retry.go:31] will retry after 5.192401612s: waiting for machine to come up
	I1207 21:07:24.691247   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:27.191865   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:25.586975   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:25.587454   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has current primary IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:25.587475   47677 main.go:141] libmachine: (no-preload-950431) Found IP for machine: 192.168.50.100
	I1207 21:07:25.587488   47677 main.go:141] libmachine: (no-preload-950431) Reserving static IP address...
	I1207 21:07:25.587769   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"} in network mk-no-preload-950431
	I1207 21:07:25.668963   47677 main.go:141] libmachine: (no-preload-950431) Reserved static IP address: 192.168.50.100
	I1207 21:07:25.668993   47677 main.go:141] libmachine: (no-preload-950431) Waiting for SSH to be available...
	I1207 21:07:25.669017   47677 main.go:141] libmachine: (no-preload-950431) DBG | Getting to WaitForSSH function...
	I1207 21:07:25.671914   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:25.672296   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431
	I1207 21:07:25.672327   47677 main.go:141] libmachine: (no-preload-950431) DBG | unable to find defined IP address of network mk-no-preload-950431 interface with MAC address 52:54:00:80:97:8f
	I1207 21:07:25.672450   47677 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH client type: external
	I1207 21:07:25.672483   47677 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa (-rw-------)
	I1207 21:07:25.672526   47677 main.go:141] libmachine: (no-preload-950431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:07:25.672541   47677 main.go:141] libmachine: (no-preload-950431) DBG | About to run SSH command:
	I1207 21:07:25.672558   47677 main.go:141] libmachine: (no-preload-950431) DBG | exit 0
	I1207 21:07:25.676079   47677 main.go:141] libmachine: (no-preload-950431) DBG | SSH cmd err, output: exit status 255: 
	I1207 21:07:25.676107   47677 main.go:141] libmachine: (no-preload-950431) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1207 21:07:25.676119   47677 main.go:141] libmachine: (no-preload-950431) DBG | command : exit 0
	I1207 21:07:25.676138   47677 main.go:141] libmachine: (no-preload-950431) DBG | err     : exit status 255
	I1207 21:07:25.676185   47677 main.go:141] libmachine: (no-preload-950431) DBG | output  : 
	I1207 21:07:28.676733   47677 main.go:141] libmachine: (no-preload-950431) DBG | Getting to WaitForSSH function...
	I1207 21:07:28.679340   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.679648   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:28.679677   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.679754   47677 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH client type: external
	I1207 21:07:28.679787   47677 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa (-rw-------)
	I1207 21:07:28.679835   47677 main.go:141] libmachine: (no-preload-950431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:07:28.679846   47677 main.go:141] libmachine: (no-preload-950431) DBG | About to run SSH command:
	I1207 21:07:28.679859   47677 main.go:141] libmachine: (no-preload-950431) DBG | exit 0
	I1207 21:07:28.765563   47677 main.go:141] libmachine: (no-preload-950431) DBG | SSH cmd err, output: <nil>: 
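WaitForSSH above probes the guest by running `exit 0` over a non-interactive ssh session; the first attempt fails with exit status 255 before the VM has an address, and the later one succeeds. A small sketch of the same probe using the external ssh client, with the IP and key path taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` over a non-interactive ssh session, the probe the
// WaitForSSH logs above perform; a non-zero exit (e.g. 255 before the VM has
// an address) means "not ready yet".
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	return cmd.Run() == nil
}

func main() {
	ip := "192.168.50.100"
	key := "/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa"
	for i := 0; i < 10; i++ {
		if sshReady(ip, key) {
			fmt.Println("SSH available")
			return
		}
		time.Sleep(3 * time.Second) // the log waits roughly this long between probes
	}
	fmt.Println("gave up waiting for SSH")
}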
	I1207 21:07:28.765828   47677 main.go:141] libmachine: (no-preload-950431) KVM machine creation complete!
	I1207 21:07:28.766119   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:07:28.766612   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:28.766771   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:28.766928   47677 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1207 21:07:28.766946   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:07:28.768119   47677 main.go:141] libmachine: Detecting operating system of created instance...
	I1207 21:07:28.768132   47677 main.go:141] libmachine: Waiting for SSH to be available...
	I1207 21:07:28.768138   47677 main.go:141] libmachine: Getting to WaitForSSH function...
	I1207 21:07:28.768152   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:28.770474   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.770784   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:28.770817   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.770969   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:28.771149   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:28.771326   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:28.771489   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:28.771649   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:28.772101   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:28.772117   47677 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1207 21:07:28.881201   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:07:28.881229   47677 main.go:141] libmachine: Detecting the provisioner...
	I1207 21:07:28.881240   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:28.884077   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.884437   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:28.884467   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:28.884693   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:28.884876   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:28.885051   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:28.885202   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:28.885392   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:28.885846   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:28.885865   47677 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1207 21:07:29.002722   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2b7375-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1207 21:07:29.002806   47677 main.go:141] libmachine: found compatible host: buildroot
	I1207 21:07:29.002828   47677 main.go:141] libmachine: Provisioning with buildroot...
	I1207 21:07:29.002839   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:07:29.003122   47677 buildroot.go:166] provisioning hostname "no-preload-950431"
	I1207 21:07:29.003161   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:07:29.003405   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.006420   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.007128   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.007173   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.007469   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:29.007832   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.008145   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.008487   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:29.008803   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:29.009550   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:29.009591   47677 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-950431 && echo "no-preload-950431" | sudo tee /etc/hostname
	I1207 21:07:29.130183   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-950431
	
	I1207 21:07:29.130213   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.132925   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.133251   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.133284   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.133436   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:29.133606   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.133761   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.133872   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:29.134060   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:29.134453   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:29.134473   47677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950431/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:07:29.257609   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:07:29.257632   47677 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:07:29.257647   47677 buildroot.go:174] setting up certificates
	I1207 21:07:29.257657   47677 provision.go:83] configureAuth start
	I1207 21:07:29.257665   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:07:29.257954   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:07:29.260827   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.261273   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.261299   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.261581   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.263670   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.264076   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.264109   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.264234   47677 provision.go:138] copyHostCerts
	I1207 21:07:29.264302   47677 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:07:29.264314   47677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:07:29.264384   47677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:07:29.264493   47677 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:07:29.264507   47677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:07:29.264541   47677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:07:29.264621   47677 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:07:29.264633   47677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:07:29.264664   47677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:07:29.264736   47677 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.no-preload-950431 san=[192.168.50.100 192.168.50.100 localhost 127.0.0.1 minikube no-preload-950431]
	I1207 21:07:29.438372   47677 provision.go:172] copyRemoteCerts
	I1207 21:07:29.438436   47677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:07:29.438458   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.441383   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.441847   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.441895   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.443278   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:29.443489   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.443663   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:29.443813   47677 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:07:29.529376   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:07:29.555868   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1207 21:07:29.579234   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:07:29.602071   47677 provision.go:86] duration metric: configureAuth took 344.401753ms
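configureAuth above generates a server certificate whose subject alternative names cover the VM IP, localhost, 127.0.0.1, minikube and the machine name, signed by the local CA, then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A self-contained crypto/x509 sketch of issuing a certificate with those SANs is below; it creates a throwaway CA in-process and is not minikube's provisioning code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem from the log.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "illustrative CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs listed in the provision log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-950431"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-950431"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.100"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}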
	I1207 21:07:29.602101   47677 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:07:29.602274   47677 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:07:29.602343   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.604813   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.605209   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.605236   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.605414   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:29.605613   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.605771   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.605907   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:29.606059   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:29.606384   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:29.606418   47677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:07:30.167457   47885 start.go:369] acquired machines lock for "pause-763966" in 48.299920182s
	I1207 21:07:30.167512   47885 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:07:30.167523   47885 fix.go:54] fixHost starting: 
	I1207 21:07:30.167890   47885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:07:30.167939   47885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:07:30.184020   47885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1207 21:07:30.184435   47885 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:07:30.184906   47885 main.go:141] libmachine: Using API Version  1
	I1207 21:07:30.184935   47885 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:07:30.185309   47885 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:07:30.185514   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:30.185686   47885 main.go:141] libmachine: (pause-763966) Calling .GetState
	I1207 21:07:30.187354   47885 fix.go:102] recreateIfNeeded on pause-763966: state=Running err=<nil>
	W1207 21:07:30.187390   47885 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:07:30.189755   47885 out.go:177] * Updating the running kvm2 "pause-763966" VM ...
	I1207 21:07:29.918574   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:07:29.918602   47677 main.go:141] libmachine: Checking connection to Docker...
	I1207 21:07:29.918613   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetURL
	I1207 21:07:29.919981   47677 main.go:141] libmachine: (no-preload-950431) DBG | Using libvirt version 6000000
	I1207 21:07:29.922407   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.922737   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.922777   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.922894   47677 main.go:141] libmachine: Docker is up and running!
	I1207 21:07:29.922914   47677 main.go:141] libmachine: Reticulating splines...
	I1207 21:07:29.922922   47677 client.go:171] LocalClient.Create took 28.302122475s
	I1207 21:07:29.922945   47677 start.go:167] duration metric: libmachine.API.Create for "no-preload-950431" took 28.302190846s
	I1207 21:07:29.922964   47677 start.go:300] post-start starting for "no-preload-950431" (driver="kvm2")
	I1207 21:07:29.922978   47677 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:07:29.922995   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:29.923268   47677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:07:29.923289   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:29.925456   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.925795   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:29.925834   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:29.925997   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:29.926164   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:29.926312   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:29.926438   47677 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:07:30.011437   47677 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:07:30.015959   47677 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:07:30.015985   47677 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:07:30.016052   47677 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:07:30.016170   47677 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:07:30.016275   47677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:07:30.024420   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:07:30.047964   47677 start.go:303] post-start completed in 124.984366ms
	I1207 21:07:30.048018   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:07:30.048571   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:07:30.051216   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.051566   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:30.051597   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.051813   47677 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:07:30.052040   47677 start.go:128] duration metric: createHost completed in 28.45337169s
	I1207 21:07:30.052068   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:30.054374   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.054599   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:30.054621   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.054722   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:30.054890   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:30.055085   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:30.055211   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:30.055343   47677 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.055655   47677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:07:30.055672   47677 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:07:30.167307   47677 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983250.155503456
	
	I1207 21:07:30.167331   47677 fix.go:206] guest clock: 1701983250.155503456
	I1207 21:07:30.167338   47677 fix.go:219] Guest: 2023-12-07 21:07:30.155503456 +0000 UTC Remote: 2023-12-07 21:07:30.052054396 +0000 UTC m=+75.310239283 (delta=103.44906ms)
	I1207 21:07:30.167375   47677 fix.go:190] guest clock delta is within tolerance: 103.44906ms
	I1207 21:07:30.167379   47677 start.go:83] releasing machines lock for "no-preload-950431", held for 28.568876733s
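The guest clock check above reads the guest's clock over SSH (the epoch-seconds output shown), compares it with the host-side timestamp, and accepts the roughly 103ms delta as within tolerance. A small sketch of that comparison, reusing the two timestamps from the log and an illustrative one-second threshold (minikube's exact tolerance is not shown here):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns a "1701983250.155503456" style epoch string, as printed by
// the guest in the log, into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1701983250.155503456") // guest clock value from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2023, 12, 7, 21, 7, 30, 52054396, time.UTC) // "Remote" timestamp from the log
	delta := guest.Sub(host)
	const tolerance = time.Second // illustrative threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %s is within tolerance %s\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance %s\n", delta, tolerance)
	}
}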
	I1207 21:07:30.167411   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:30.167744   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:07:30.170601   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.171006   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:30.171039   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.171165   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:30.171686   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:30.171883   47677 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:07:30.171968   47677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:07:30.172009   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:30.172086   47677 ssh_runner.go:195] Run: cat /version.json
	I1207 21:07:30.172110   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:07:30.174582   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.174899   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:30.174925   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.175056   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.175110   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:30.175286   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:30.175446   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:30.175470   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:30.175502   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:30.175587   47677 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:07:30.175659   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:07:30.175796   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:07:30.175957   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:07:30.176093   47677 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:07:30.258701   47677 ssh_runner.go:195] Run: systemctl --version
	I1207 21:07:30.283727   47677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:07:30.440875   47677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:07:30.447387   47677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:07:30.447459   47677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:07:30.462448   47677 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:07:30.462470   47677 start.go:475] detecting cgroup driver to use...
	I1207 21:07:30.462550   47677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:07:30.477803   47677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:07:30.489963   47677 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:07:30.490019   47677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:07:30.502404   47677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:07:30.515339   47677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:07:30.628615   47677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:07:30.767827   47677 docker.go:219] disabling docker service ...
	I1207 21:07:30.767885   47677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:07:30.784387   47677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:07:30.800947   47677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:07:30.905850   47677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:07:31.010460   47677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:07:31.024659   47677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:07:31.044132   47677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:07:31.044186   47677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:31.055525   47677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:07:31.055604   47677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:31.067056   47677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:31.078086   47677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:31.089451   47677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:07:31.101580   47677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:07:31.111926   47677 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:07:31.112000   47677 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:07:31.127475   47677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:07:31.138087   47677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:07:31.247704   47677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:07:31.419666   47677 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:07:31.419750   47677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:07:31.424475   47677 start.go:543] Will wait 60s for crictl version
	I1207 21:07:31.424528   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:31.428179   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:07:31.467873   47677 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:07:31.467947   47677 ssh_runner.go:195] Run: crio --version
	I1207 21:07:31.513224   47677 ssh_runner.go:195] Run: crio --version
	I1207 21:07:31.566059   47677 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1207 21:07:30.191196   47885 machine.go:88] provisioning docker machine ...
	I1207 21:07:30.191218   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:30.191434   47885 main.go:141] libmachine: (pause-763966) Calling .GetMachineName
	I1207 21:07:30.191591   47885 buildroot.go:166] provisioning hostname "pause-763966"
	I1207 21:07:30.191615   47885 main.go:141] libmachine: (pause-763966) Calling .GetMachineName
	I1207 21:07:30.191775   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.194611   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.195060   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.195087   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.195229   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.195414   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.195577   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.195700   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.195847   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.196172   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:30.196186   47885 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-763966 && echo "pause-763966" | sudo tee /etc/hostname
	I1207 21:07:30.339851   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-763966
	
	I1207 21:07:30.339883   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.342876   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.343334   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.343366   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.343576   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.343772   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.343982   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.344187   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.344380   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.344864   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:30.344891   47885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-763966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-763966/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-763966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:07:30.463538   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:07:30.463567   47885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:07:30.463609   47885 buildroot.go:174] setting up certificates
	I1207 21:07:30.463619   47885 provision.go:83] configureAuth start
	I1207 21:07:30.463632   47885 main.go:141] libmachine: (pause-763966) Calling .GetMachineName
	I1207 21:07:30.463881   47885 main.go:141] libmachine: (pause-763966) Calling .GetIP
	I1207 21:07:30.466509   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.466835   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.466865   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.467040   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.469115   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.469452   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.469481   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.469647   47885 provision.go:138] copyHostCerts
	I1207 21:07:30.469711   47885 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:07:30.469721   47885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:07:30.469771   47885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:07:30.469843   47885 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:07:30.469851   47885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:07:30.469874   47885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:07:30.469930   47885 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:07:30.469944   47885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:07:30.469968   47885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:07:30.470050   47885 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.pause-763966 san=[192.168.39.237 192.168.39.237 localhost 127.0.0.1 minikube pause-763966]
	I1207 21:07:30.624834   47885 provision.go:172] copyRemoteCerts
	I1207 21:07:30.624904   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:07:30.624932   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.627807   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.628175   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.628216   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.628466   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.628663   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.628852   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.629015   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:30.721413   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:07:30.750553   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1207 21:07:30.776230   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:07:30.808007   47885 provision.go:86] duration metric: configureAuth took 344.374986ms
	I1207 21:07:30.808031   47885 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:07:30.808223   47885 config.go:182] Loaded profile config "pause-763966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:07:30.808312   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:30.811071   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.811380   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:30.811415   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:30.811554   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:30.811747   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.811950   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:30.812083   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:30.812250   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:30.812583   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:30.812600   47885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:07:29.194325   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:31.691847   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:33.691940   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:31.567476   47677 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:07:31.570153   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:31.570440   47677 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:07:18 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:07:31.570470   47677 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:07:31.570582   47677 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1207 21:07:31.574572   47677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:07:31.586889   47677 localpath.go:92] copying /home/jenkins/minikube-integration/17719-9628/.minikube/client.crt -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt
	I1207 21:07:31.587012   47677 localpath.go:117] copying /home/jenkins/minikube-integration/17719-9628/.minikube/client.key -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.key
	I1207 21:07:31.587105   47677 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:07:31.587135   47677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:07:31.619383   47677 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1207 21:07:31.619411   47677 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:07:31.619462   47677 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:07:31.619486   47677 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:07:31.619521   47677 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:07:31.619535   47677 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:07:31.619586   47677 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:07:31.619639   47677 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:07:31.619592   47677 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:07:31.619610   47677 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1207 21:07:31.620548   47677 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1207 21:07:31.620572   47677 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:07:31.620548   47677 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:07:31.620548   47677 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:07:31.620614   47677 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:07:31.620617   47677 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:07:31.620619   47677 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:07:31.620551   47677 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:07:31.835253   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:07:31.869555   47677 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1207 21:07:31.869606   47677 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:07:31.869660   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:31.873415   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:07:31.886860   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1207 21:07:31.893746   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:07:31.897844   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:07:31.898663   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1207 21:07:31.899840   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:07:31.913889   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1207 21:07:31.913995   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:07:31.941645   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:07:32.014267   47677 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I1207 21:07:32.014311   47677 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I1207 21:07:32.014358   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.052690   47677 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1207 21:07:32.052728   47677 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1207 21:07:32.052727   47677 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1207 21:07:32.052786   47677 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:07:32.052805   47677 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1207 21:07:32.052836   47677 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:07:32.052849   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.052732   47677 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:07:32.052889   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.052747   47677 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:07:32.052909   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.29.0-rc.1': No such file or directory
	I1207 21:07:32.052929   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (28359680 bytes)
	I1207 21:07:32.052931   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.052897   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.062700   47677 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1207 21:07:32.062739   47677 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:07:32.062740   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I1207 21:07:32.062776   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:32.075658   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:07:32.075754   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:07:32.075791   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:07:32.075833   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1207 21:07:32.236529   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1207 21:07:32.236577   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1207 21:07:32.236634   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1207 21:07:32.236658   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:07:32.236724   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:07:32.236636   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:07:32.240172   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I1207 21:07:32.240228   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1207 21:07:32.240247   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9
	I1207 21:07:32.240278   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:07:32.240308   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:07:32.263484   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I1207 21:07:32.263514   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I1207 21:07:32.284181   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.10-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.10-0': No such file or directory
	I1207 21:07:32.284243   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1': No such file or directory
	I1207 21:07:32.284275   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (35069952 bytes)
	I1207 21:07:32.284271   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 --> /var/lib/minikube/images/etcd_3.5.10-0 (56657408 bytes)
	I1207 21:07:32.334812   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1207 21:07:32.334871   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1': No such file or directory
	I1207 21:07:32.334883   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I1207 21:07:32.334899   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (18522624 bytes)
	I1207 21:07:32.334912   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I1207 21:07:32.334914   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:07:32.402053   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1': No such file or directory
	I1207 21:07:32.402085   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (33436672 bytes)
	I1207 21:07:32.458364   47677 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.9
	I1207 21:07:32.458438   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I1207 21:07:32.502623   47677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:07:33.278051   47677 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I1207 21:07:33.278102   47677 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:07:33.278153   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:07:33.278158   47677 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1207 21:07:33.278204   47677 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:07:33.278252   47677 ssh_runner.go:195] Run: which crictl
	I1207 21:07:37.686863   48213 start.go:369] acquired machines lock for "embed-certs-598346" in 33.450781604s
	I1207 21:07:37.686928   48213 start.go:93] Provisioning new machine with config: &{Name:embed-certs-598346 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:07:37.687076   48213 start.go:125] createHost starting for "" (driver="kvm2")
	I1207 21:07:35.694572   46932 pod_ready.go:102] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"False"
	I1207 21:07:37.238382   46932 pod_ready.go:92] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"True"
	I1207 21:07:37.238411   46932 pod_ready.go:81] duration metric: took 41.566346071s waiting for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:07:37.238424   46932 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:07:37.246659   46932 pod_ready.go:92] pod "kube-proxy-wrl9t" in "kube-system" namespace has status "Ready":"True"
	I1207 21:07:37.246687   46932 pod_ready.go:81] duration metric: took 8.2552ms waiting for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:07:37.246698   46932 pod_ready.go:38] duration metric: took 41.579183632s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:07:37.246716   46932 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:07:37.246769   46932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:07:37.262341   46932 api_server.go:72] duration metric: took 41.906320635s to wait for apiserver process to appear ...
	I1207 21:07:37.262368   46932 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:07:37.262386   46932 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:07:37.270080   46932 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:07:37.271209   46932 api_server.go:141] control plane version: v1.16.0
	I1207 21:07:37.271233   46932 api_server.go:131] duration metric: took 8.858171ms to wait for apiserver health ...
	I1207 21:07:37.271244   46932 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:07:37.275404   46932 system_pods.go:59] 3 kube-system pods found
	I1207 21:07:37.275438   46932 system_pods.go:61] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:37.275446   46932 system_pods.go:61] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:37.275453   46932 system_pods.go:61] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:37.275461   46932 system_pods.go:74] duration metric: took 4.210053ms to wait for pod list to return data ...
	I1207 21:07:37.275469   46932 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:07:37.278487   46932 default_sa.go:45] found service account: "default"
	I1207 21:07:37.278513   46932 default_sa.go:55] duration metric: took 3.038168ms for default service account to be created ...
	I1207 21:07:37.278520   46932 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:07:37.284484   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:37.284521   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:37.284530   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:37.284536   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:37.284555   46932 retry.go:31] will retry after 296.348393ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:37.585710   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:37.585742   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:37.585750   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:37.585756   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:37.585775   46932 retry.go:31] will retry after 323.000686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:37.913762   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:37.913793   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:37.913800   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:37.913805   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:37.913822   46932 retry.go:31] will retry after 382.501661ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:38.306840   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:38.306874   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:38.306882   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:38.306888   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:38.306904   46932 retry.go:31] will retry after 413.279764ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:37.689202   48213 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 21:07:37.689404   48213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:07:37.689454   48213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:07:37.706261   48213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I1207 21:07:37.706731   48213 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:07:37.707372   48213 main.go:141] libmachine: Using API Version  1
	I1207 21:07:37.707395   48213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:07:37.707735   48213 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:07:37.707925   48213 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:07:37.708081   48213 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:07:37.708337   48213 start.go:159] libmachine.API.Create for "embed-certs-598346" (driver="kvm2")
	I1207 21:07:37.708369   48213 client.go:168] LocalClient.Create starting
	I1207 21:07:37.708405   48213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 21:07:37.708440   48213 main.go:141] libmachine: Decoding PEM data...
	I1207 21:07:37.708471   48213 main.go:141] libmachine: Parsing certificate...
	I1207 21:07:37.708540   48213 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 21:07:37.708565   48213 main.go:141] libmachine: Decoding PEM data...
	I1207 21:07:37.708585   48213 main.go:141] libmachine: Parsing certificate...
	I1207 21:07:37.708611   48213 main.go:141] libmachine: Running pre-create checks...
	I1207 21:07:37.708624   48213 main.go:141] libmachine: (embed-certs-598346) Calling .PreCreateCheck
	I1207 21:07:37.709037   48213 main.go:141] libmachine: (embed-certs-598346) Calling .GetConfigRaw
	I1207 21:07:37.709479   48213 main.go:141] libmachine: Creating machine...
	I1207 21:07:37.709495   48213 main.go:141] libmachine: (embed-certs-598346) Calling .Create
	I1207 21:07:37.709630   48213 main.go:141] libmachine: (embed-certs-598346) Creating KVM machine...
	I1207 21:07:37.710891   48213 main.go:141] libmachine: (embed-certs-598346) DBG | found existing default KVM network
	I1207 21:07:37.712266   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:37.712086   48402 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e1:ad:58} reservation:<nil>}
	I1207 21:07:37.713492   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:37.713415   48402 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:c6:79} reservation:<nil>}
	I1207 21:07:37.714600   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:37.714510   48402 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:00:c3:ba} reservation:<nil>}
	I1207 21:07:37.716014   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:37.715933   48402 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000387150}
	I1207 21:07:37.722306   48213 main.go:141] libmachine: (embed-certs-598346) DBG | trying to create private KVM network mk-embed-certs-598346 192.168.72.0/24...
	I1207 21:07:37.814681   48213 main.go:141] libmachine: (embed-certs-598346) DBG | private KVM network mk-embed-certs-598346 192.168.72.0/24 created
	I1207 21:07:37.814731   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:37.814634   48402 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:07:37.814754   48213 main.go:141] libmachine: (embed-certs-598346) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346 ...
	I1207 21:07:37.814767   48213 main.go:141] libmachine: (embed-certs-598346) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 21:07:37.814790   48213 main.go:141] libmachine: (embed-certs-598346) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 21:07:38.054086   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:38.053910   48402 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa...
	I1207 21:07:38.281828   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:38.281679   48402 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/embed-certs-598346.rawdisk...
	I1207 21:07:38.281861   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Writing magic tar header
	I1207 21:07:38.281897   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Writing SSH key tar header
	I1207 21:07:38.282428   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:38.282343   48402 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346 ...
	I1207 21:07:38.282539   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346
	I1207 21:07:38.282574   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 21:07:38.282602   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:07:38.282623   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 21:07:38.282638   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 21:07:38.282655   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home/jenkins
	I1207 21:07:38.282668   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Checking permissions on dir: /home
	I1207 21:07:38.282684   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346 (perms=drwx------)
	I1207 21:07:38.282702   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 21:07:38.282718   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 21:07:38.282735   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 21:07:38.282753   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 21:07:38.282774   48213 main.go:141] libmachine: (embed-certs-598346) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 21:07:38.282783   48213 main.go:141] libmachine: (embed-certs-598346) Creating domain...
	I1207 21:07:38.282796   48213 main.go:141] libmachine: (embed-certs-598346) DBG | Skipping /home - not owner
	I1207 21:07:38.283785   48213 main.go:141] libmachine: (embed-certs-598346) define libvirt domain using xml: 
	I1207 21:07:38.283807   48213 main.go:141] libmachine: (embed-certs-598346) <domain type='kvm'>
	I1207 21:07:38.283818   48213 main.go:141] libmachine: (embed-certs-598346)   <name>embed-certs-598346</name>
	I1207 21:07:38.283843   48213 main.go:141] libmachine: (embed-certs-598346)   <memory unit='MiB'>2200</memory>
	I1207 21:07:38.283859   48213 main.go:141] libmachine: (embed-certs-598346)   <vcpu>2</vcpu>
	I1207 21:07:38.283877   48213 main.go:141] libmachine: (embed-certs-598346)   <features>
	I1207 21:07:38.283892   48213 main.go:141] libmachine: (embed-certs-598346)     <acpi/>
	I1207 21:07:38.283904   48213 main.go:141] libmachine: (embed-certs-598346)     <apic/>
	I1207 21:07:38.283943   48213 main.go:141] libmachine: (embed-certs-598346)     <pae/>
	I1207 21:07:38.283966   48213 main.go:141] libmachine: (embed-certs-598346)     
	I1207 21:07:38.283986   48213 main.go:141] libmachine: (embed-certs-598346)   </features>
	I1207 21:07:38.284004   48213 main.go:141] libmachine: (embed-certs-598346)   <cpu mode='host-passthrough'>
	I1207 21:07:38.284033   48213 main.go:141] libmachine: (embed-certs-598346)   
	I1207 21:07:38.284053   48213 main.go:141] libmachine: (embed-certs-598346)   </cpu>
	I1207 21:07:38.284067   48213 main.go:141] libmachine: (embed-certs-598346)   <os>
	I1207 21:07:38.284079   48213 main.go:141] libmachine: (embed-certs-598346)     <type>hvm</type>
	I1207 21:07:38.284092   48213 main.go:141] libmachine: (embed-certs-598346)     <boot dev='cdrom'/>
	I1207 21:07:38.284103   48213 main.go:141] libmachine: (embed-certs-598346)     <boot dev='hd'/>
	I1207 21:07:38.284117   48213 main.go:141] libmachine: (embed-certs-598346)     <bootmenu enable='no'/>
	I1207 21:07:38.284128   48213 main.go:141] libmachine: (embed-certs-598346)   </os>
	I1207 21:07:38.284158   48213 main.go:141] libmachine: (embed-certs-598346)   <devices>
	I1207 21:07:38.284192   48213 main.go:141] libmachine: (embed-certs-598346)     <disk type='file' device='cdrom'>
	I1207 21:07:38.284226   48213 main.go:141] libmachine: (embed-certs-598346)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/boot2docker.iso'/>
	I1207 21:07:38.284240   48213 main.go:141] libmachine: (embed-certs-598346)       <target dev='hdc' bus='scsi'/>
	I1207 21:07:38.284263   48213 main.go:141] libmachine: (embed-certs-598346)       <readonly/>
	I1207 21:07:38.284276   48213 main.go:141] libmachine: (embed-certs-598346)     </disk>
	I1207 21:07:38.284291   48213 main.go:141] libmachine: (embed-certs-598346)     <disk type='file' device='disk'>
	I1207 21:07:38.284306   48213 main.go:141] libmachine: (embed-certs-598346)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 21:07:38.284326   48213 main.go:141] libmachine: (embed-certs-598346)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/embed-certs-598346.rawdisk'/>
	I1207 21:07:38.284338   48213 main.go:141] libmachine: (embed-certs-598346)       <target dev='hda' bus='virtio'/>
	I1207 21:07:38.284351   48213 main.go:141] libmachine: (embed-certs-598346)     </disk>
	I1207 21:07:38.284365   48213 main.go:141] libmachine: (embed-certs-598346)     <interface type='network'>
	I1207 21:07:38.284379   48213 main.go:141] libmachine: (embed-certs-598346)       <source network='mk-embed-certs-598346'/>
	I1207 21:07:38.284397   48213 main.go:141] libmachine: (embed-certs-598346)       <model type='virtio'/>
	I1207 21:07:38.284408   48213 main.go:141] libmachine: (embed-certs-598346)     </interface>
	I1207 21:07:38.284417   48213 main.go:141] libmachine: (embed-certs-598346)     <interface type='network'>
	I1207 21:07:38.284437   48213 main.go:141] libmachine: (embed-certs-598346)       <source network='default'/>
	I1207 21:07:38.284468   48213 main.go:141] libmachine: (embed-certs-598346)       <model type='virtio'/>
	I1207 21:07:38.284482   48213 main.go:141] libmachine: (embed-certs-598346)     </interface>
	I1207 21:07:38.284494   48213 main.go:141] libmachine: (embed-certs-598346)     <serial type='pty'>
	I1207 21:07:38.284506   48213 main.go:141] libmachine: (embed-certs-598346)       <target port='0'/>
	I1207 21:07:38.284514   48213 main.go:141] libmachine: (embed-certs-598346)     </serial>
	I1207 21:07:38.284527   48213 main.go:141] libmachine: (embed-certs-598346)     <console type='pty'>
	I1207 21:07:38.284542   48213 main.go:141] libmachine: (embed-certs-598346)       <target type='serial' port='0'/>
	I1207 21:07:38.284566   48213 main.go:141] libmachine: (embed-certs-598346)     </console>
	I1207 21:07:38.284593   48213 main.go:141] libmachine: (embed-certs-598346)     <rng model='virtio'>
	I1207 21:07:38.284609   48213 main.go:141] libmachine: (embed-certs-598346)       <backend model='random'>/dev/random</backend>
	I1207 21:07:38.284628   48213 main.go:141] libmachine: (embed-certs-598346)     </rng>
	I1207 21:07:38.284645   48213 main.go:141] libmachine: (embed-certs-598346)     
	I1207 21:07:38.284658   48213 main.go:141] libmachine: (embed-certs-598346)     
	I1207 21:07:38.284670   48213 main.go:141] libmachine: (embed-certs-598346)   </devices>
	I1207 21:07:38.284680   48213 main.go:141] libmachine: (embed-certs-598346) </domain>
	I1207 21:07:38.284692   48213 main.go:141] libmachine: (embed-certs-598346) 
	I1207 21:07:38.289472   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:c9:10:99 in network default
	I1207 21:07:38.290233   48213 main.go:141] libmachine: (embed-certs-598346) Ensuring networks are active...
	I1207 21:07:38.290261   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:38.291102   48213 main.go:141] libmachine: (embed-certs-598346) Ensuring network default is active
	I1207 21:07:38.291517   48213 main.go:141] libmachine: (embed-certs-598346) Ensuring network mk-embed-certs-598346 is active
	I1207 21:07:38.292241   48213 main.go:141] libmachine: (embed-certs-598346) Getting domain xml...
	I1207 21:07:38.293138   48213 main.go:141] libmachine: (embed-certs-598346) Creating domain...
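The block above shows the kvm2 driver rendering a libvirt domain XML for embed-certs-598346 and then defining and starting the domain. As a rough sketch only (not minikube's actual driver code, which talks to the libvirt API directly rather than shelling out), the equivalent define/start step could be driven from Go via the virsh CLI; the paths and helper name below are illustrative assumptions:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// defineAndStart is a hypothetical helper: it feeds a previously rendered
// domain XML (like the one logged above) to libvirt via virsh and then
// starts the resulting domain.
func defineAndStart(xmlPath, domainName string) error {
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// The XML path is a placeholder, not taken from the log.
	if err := defineAndStart("/tmp/embed-certs-598346.xml", "embed-certs-598346"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("domain defined and started")
}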
	I1207 21:07:35.136377   47677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (1.858197818s)
	I1207 21:07:35.136411   47677 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1207 21:07:35.136436   47677 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:07:35.136489   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:07:35.136408   47677 ssh_runner.go:235] Completed: which crictl: (1.858134043s)
	I1207 21:07:35.136603   47677 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:07:38.020856   47677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.884339389s)
	I1207 21:07:38.020877   47677 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1207 21:07:38.020896   47677 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:07:38.020944   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:07:38.020971   47677 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.884345585s)
	I1207 21:07:38.021039   47677 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 21:07:38.021133   47677 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:07:37.404458   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:07:37.404490   47885 machine.go:91] provisioned docker machine in 7.213270016s
	I1207 21:07:37.404503   47885 start.go:300] post-start starting for "pause-763966" (driver="kvm2")
	I1207 21:07:37.404515   47885 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:07:37.404540   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.404909   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:07:37.404940   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.407902   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.408334   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.408368   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.408509   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.408711   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.408837   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.408970   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:37.521762   47885 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:07:37.526220   47885 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:07:37.526247   47885 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:07:37.526308   47885 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:07:37.526416   47885 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:07:37.526541   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:07:37.539457   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:07:37.564052   47885 start.go:303] post-start completed in 159.537127ms
	I1207 21:07:37.564083   47885 fix.go:56] fixHost completed within 7.39656043s
	I1207 21:07:37.564102   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.567031   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.567432   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.567462   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.567631   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.567849   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.568032   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.568206   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.568384   47885 main.go:141] libmachine: Using SSH client type: native
	I1207 21:07:37.568686   47885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1207 21:07:37.568707   47885 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:07:37.686682   47885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983257.682984288
	
	I1207 21:07:37.686707   47885 fix.go:206] guest clock: 1701983257.682984288
	I1207 21:07:37.686716   47885 fix.go:219] Guest: 2023-12-07 21:07:37.682984288 +0000 UTC Remote: 2023-12-07 21:07:37.564087197 +0000 UTC m=+55.882358893 (delta=118.897091ms)
	I1207 21:07:37.686771   47885 fix.go:190] guest clock delta is within tolerance: 118.897091ms
	I1207 21:07:37.686780   47885 start.go:83] releasing machines lock for "pause-763966", held for 7.51930022s
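The fixHost step above reads the guest clock with `date +%s.%N` over SSH, compares it with the host clock, and accepts the 118.897091ms skew as within tolerance. A minimal sketch of that comparison, assuming a two-second tolerance (the actual threshold is not shown in the log), could look like this in Go:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` captured from the guest and
// returns how far the guest clock is ahead of the given host time.
func clockDelta(guestDateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestDateOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Guest and host values copied from the log lines above; the tolerance is an assumption.
	host := time.Date(2023, 12, 7, 21, 7, 37, 564087197, time.UTC)
	delta, err := clockDelta("1701983257.682984288", host)
	if err != nil {
		panic(err)
	}
	within := delta < 2*time.Second && delta > -2*time.Second
	fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
}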
	I1207 21:07:37.686812   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.687086   47885 main.go:141] libmachine: (pause-763966) Calling .GetIP
	I1207 21:07:37.689968   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.690410   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.690448   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.690593   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.691097   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.691281   47885 main.go:141] libmachine: (pause-763966) Calling .DriverName
	I1207 21:07:37.691389   47885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:07:37.691429   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.691532   47885 ssh_runner.go:195] Run: cat /version.json
	I1207 21:07:37.691558   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHHostname
	I1207 21:07:37.694652   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.694973   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.695096   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.695128   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.695319   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.695451   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:37.695488   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:37.695541   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.695756   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.695922   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHPort
	I1207 21:07:37.695930   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:37.696478   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHKeyPath
	I1207 21:07:37.696672   47885 main.go:141] libmachine: (pause-763966) Calling .GetSSHUsername
	I1207 21:07:37.696848   47885 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/pause-763966/id_rsa Username:docker}
	I1207 21:07:37.824802   47885 ssh_runner.go:195] Run: systemctl --version
	I1207 21:07:37.833573   47885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:07:37.992042   47885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:07:37.998690   47885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:07:37.998764   47885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:07:38.008789   47885 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 21:07:38.008817   47885 start.go:475] detecting cgroup driver to use...
	I1207 21:07:38.008903   47885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:07:38.029726   47885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:07:38.045392   47885 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:07:38.045453   47885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:07:38.061788   47885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:07:38.077501   47885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:07:38.230276   47885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:07:38.929441   47885 docker.go:219] disabling docker service ...
	I1207 21:07:38.929533   47885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:07:38.972952   47885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:07:39.000065   47885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:07:39.365500   47885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:07:39.657590   47885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:07:39.734261   47885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:07:39.833606   47885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:07:39.833681   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.870335   47885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:07:39.870417   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.901831   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.928228   47885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:07:39.952330   47885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:07:39.972481   47885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:07:39.987141   47885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:07:40.003730   47885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:07:40.274754   47885 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:07:41.974747   47885 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.699950937s)
	I1207 21:07:41.974779   47885 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:07:41.974832   47885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:07:41.981723   47885 start.go:543] Will wait 60s for crictl version
	I1207 21:07:41.981786   47885 ssh_runner.go:195] Run: which crictl
	I1207 21:07:41.987013   47885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:07:42.050779   47885 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:07:42.050904   47885 ssh_runner.go:195] Run: crio --version
	I1207 21:07:42.110899   47885 ssh_runner.go:195] Run: crio --version
	I1207 21:07:42.164304   47885 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
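The 47885 run above disables containerd and docker, points crictl at the CRI-O socket, rewrites pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf with sed, and restarts crio. The following is only an illustration of those same shell steps driven from Go (it requires root and a CRI-O install, and is not minikube's own code path, which runs the commands over SSH):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run is a tiny helper used by this sketch only.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
	// Pin the pause image and switch the cgroup manager, mirroring the sed
	// commands in the log, then restart CRI-O so the changes take effect.
	run("sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, conf)
	run("sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf)
	run("systemctl", "daemon-reload")
	run("systemctl", "restart", "crio")
	fmt.Println("CRI-O reconfigured and restarted")
}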
	I1207 21:07:38.725495   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:38.725529   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:38.725538   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:38.725546   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:38.725569   46932 retry.go:31] will retry after 460.079146ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:39.191293   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:39.191323   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:39.191331   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:39.191338   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:39.191354   46932 retry.go:31] will retry after 654.217973ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:39.851451   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:39.851552   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:39.851566   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:39.851572   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:39.851588   46932 retry.go:31] will retry after 955.752241ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:40.812025   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:40.812059   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:40.812067   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:40.812073   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:40.812091   46932 retry.go:31] will retry after 1.045207444s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:41.863772   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:41.863810   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:41.863818   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:41.863825   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:41.863844   46932 retry.go:31] will retry after 1.532062886s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:43.400344   46932 system_pods.go:86] 3 kube-system pods found
	I1207 21:07:43.400380   46932 system_pods.go:89] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:07:43.400389   46932 system_pods.go:89] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:07:43.400395   46932 system_pods.go:89] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:07:43.400414   46932 retry.go:31] will retry after 1.410839946s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:07:39.829535   48213 main.go:141] libmachine: (embed-certs-598346) Waiting to get IP...
	I1207 21:07:39.830545   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:39.831044   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:39.831070   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:39.831017   48402 retry.go:31] will retry after 230.292105ms: waiting for machine to come up
	I1207 21:07:40.063716   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:40.064449   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:40.064481   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:40.064359   48402 retry.go:31] will retry after 329.840952ms: waiting for machine to come up
	I1207 21:07:40.396107   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:40.396746   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:40.396775   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:40.396658   48402 retry.go:31] will retry after 455.324621ms: waiting for machine to come up
	I1207 21:07:40.854129   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:40.854604   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:40.854629   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:40.854550   48402 retry.go:31] will retry after 580.382717ms: waiting for machine to come up
	I1207 21:07:41.436363   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:41.436926   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:41.436952   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:41.436875   48402 retry.go:31] will retry after 695.594858ms: waiting for machine to come up
	I1207 21:07:42.134414   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:42.135037   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:42.135069   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:42.134968   48402 retry.go:31] will retry after 822.431255ms: waiting for machine to come up
	I1207 21:07:42.959753   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:42.960319   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:42.960350   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:42.960278   48402 retry.go:31] will retry after 954.543188ms: waiting for machine to come up
	I1207 21:07:43.916120   48213 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:07:43.916542   48213 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:07:43.916587   48213 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:07:43.916526   48402 retry.go:31] will retry after 1.10388154s: waiting for machine to come up
	I1207 21:07:40.305581   47677 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.284418443s)
	I1207 21:07:40.305617   47677 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1207 21:07:40.305646   47677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1207 21:07:40.305654   47677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.284686205s)
	I1207 21:07:40.305678   47677 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1207 21:07:40.305712   47677 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:07:40.305755   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:07:43.426529   47677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.120745681s)
	I1207 21:07:43.426561   47677 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1207 21:07:43.426593   47677 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:07:43.426641   47677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:07:42.165952   47885 main.go:141] libmachine: (pause-763966) Calling .GetIP
	I1207 21:07:42.169388   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:42.169815   47885 main.go:141] libmachine: (pause-763966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:cb:ce", ip: ""} in network mk-pause-763966: {Iface:virbr1 ExpiryTime:2023-12-07 22:05:16 +0000 UTC Type:0 Mac:52:54:00:19:cb:ce Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:pause-763966 Clientid:01:52:54:00:19:cb:ce}
	I1207 21:07:42.169842   47885 main.go:141] libmachine: (pause-763966) DBG | domain pause-763966 has defined IP address 192.168.39.237 and MAC address 52:54:00:19:cb:ce in network mk-pause-763966
	I1207 21:07:42.170126   47885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 21:07:42.175657   47885 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:07:42.175717   47885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:07:42.234910   47885 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:07:42.234943   47885 crio.go:415] Images already preloaded, skipping extraction
	I1207 21:07:42.235020   47885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:07:42.278372   47885 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:07:42.278396   47885 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:07:42.278517   47885 ssh_runner.go:195] Run: crio config
	I1207 21:07:42.444519   47885 cni.go:84] Creating CNI manager for ""
	I1207 21:07:42.444554   47885 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:07:42.444586   47885 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:07:42.444620   47885 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-763966 NodeName:pause-763966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:07:42.444881   47885 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-763966"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:07:42.445014   47885 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-763966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-763966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:07:42.445086   47885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:07:42.467395   47885 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:07:42.467487   47885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:07:42.511628   47885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1207 21:07:42.544396   47885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:07:42.591065   47885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1207 21:07:42.789598   47885 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I1207 21:07:42.825431   47885 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966 for IP: 192.168.39.237
	I1207 21:07:42.825474   47885 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:07:42.825656   47885 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:07:42.825713   47885 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:07:42.825819   47885 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/client.key
	I1207 21:07:42.825914   47885 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/apiserver.key.cf509944
	I1207 21:07:42.825992   47885 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/proxy-client.key
	I1207 21:07:42.826146   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:07:42.826189   47885 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:07:42.826207   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:07:42.826244   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:07:42.826287   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:07:42.826320   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:07:42.826383   47885 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:07:42.827247   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:07:42.902388   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 21:07:42.970133   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:07:43.015938   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/pause-763966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 21:07:43.058300   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:07:43.137443   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:07:43.189174   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:07:43.240886   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:07:43.296335   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:07:43.350271   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:07:43.412610   47885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:07:43.475454   47885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:07:43.526819   47885 ssh_runner.go:195] Run: openssl version
	I1207 21:07:43.546515   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:07:43.561720   47885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:07:43.570116   47885 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:07:43.570205   47885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:07:43.577494   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:07:43.587448   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:07:43.598484   47885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:07:43.604317   47885 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:07:43.604420   47885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:07:43.611072   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:07:43.621498   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:07:43.636404   47885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:07:43.645084   47885 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:07:43.645165   47885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:07:43.657188   47885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:07:43.672912   47885 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:07:43.681666   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:07:43.694094   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:07:43.705788   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:07:43.719218   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:07:43.732112   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:07:43.744493   47885 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
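Each of the `openssl x509 ... -checkend 86400` calls above checks that a certificate is still valid for at least 24 hours. A hedged Go equivalent of that check using crypto/x509 (the path in main is a placeholder, and this is not the code minikube runs, which shells out to openssl on the guest):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// validFor reports whether the first certificate in a PEM file is still valid
// for at least the given duration, roughly what `openssl x509 -checkend 86400` does.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for another 24h:", ok)
}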
	I1207 21:07:43.764423   47885 kubeadm.go:404] StartCluster: {Name:pause-763966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.4 ClusterName:pause-763966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gp
u-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:07:43.764573   47885 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:07:43.764656   47885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:07:43.852222   47885 cri.go:89] found id: "a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022"
	I1207 21:07:43.852249   47885 cri.go:89] found id: "d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d"
	I1207 21:07:43.852259   47885 cri.go:89] found id: "085182fb95992bc23ed02f0be641f942c2f7195cdbc192e5d86f5c2e89beff27"
	I1207 21:07:43.852265   47885 cri.go:89] found id: "37d089b9fc205ebc244d160915340f06e87b5e3b59b75f3b316fb5e333bc21a6"
	I1207 21:07:43.852270   47885 cri.go:89] found id: "3eb4483e3db6fd79059095509f2360ce563cf446b08f2091f8add3d6aa59bd6b"
	I1207 21:07:43.852276   47885 cri.go:89] found id: "531a6b1cf0597b055a9600ccccdc9633c3470679ae44e383bdf594a3f7bb16b7"
	I1207 21:07:43.852282   47885 cri.go:89] found id: ""
	I1207 21:07:43.852335   47885 ssh_runner.go:195] Run: sudo runc list -f json
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 21:05:13 UTC, ends at Thu 2023-12-07 21:08:27 UTC. --
	Dec 07 21:08:26 pause-763966 crio[2627]: time="2023-12-07 21:08:26.881012646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f24e0a30-2a57-4979-a0bb-d13594824747 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:26 pause-763966 crio[2627]: time="2023-12-07 21:08:26.881635386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311d35afa7adc6d1d9942b5aec21f92190454da644eaf6f4e7910acd7f2a093b,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983287454897198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5c6617b826def475d3fa2c178ff332e191388d1387175aadf0a351c5181d28,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983287450904396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: df9782b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1223e03355b6739ac1f97e8d18a39a3efc6d93757ea2288ef7308bb21a8bc,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983281881993560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284e513959658a57d171808e0788c6026cbf12c84885f77d2b56924ebb961190,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983281861353133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string
{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877f3c78fa25d75519189e55855e73592a2e6a56b8f5cfee02d78aedc0132db0,PodSandboxId:85990b990dc87995a8dfbd15d19e31173a62b9112d9d3088cf095d9a2eb79c7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983281802925054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2916073a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b913d5fa93c03725b56b7a886180f34b6e79cba88218227920b5c5c188a0c,PodSandboxId:8ad986722a0e9184b3f8541dcbcbb80a47765ba39399db9d884aa3164712f234,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983281832231484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473f50d6336748bfc2b65d297450d2de,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701983265645191956,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: d
f9782b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701983264609239912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486140b51e77711889ed6ef7f61897f6d58b0a3df15a1b02b40c922636892bfb,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1701983264178020573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:map[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200422fadb3739c9c51d92e4e1c0afc57789b5c1f0ec12a5c3629c294275e868,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701983264152804039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022,PodSandboxId:3dfe206eeb05a6b0a0241c2e0ec2e75802ffa6d57ef08814c0fc6a8ef1d122ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701983259742609548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
3f50d6336748bfc2b65d297450d2de,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d,PodSandboxId:652c03a9919f782932691dd53b4d4e9d2d022fac02a6e80365f8d42a6bb8d8e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701983259637589298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string
]string{io.kubernetes.container.hash: 2916073a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f24e0a30-2a57-4979-a0bb-d13594824747 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:26 pause-763966 crio[2627]: time="2023-12-07 21:08:26.940110145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ed1bdbe8-9dca-46db-bf99-59e89f293602 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:26 pause-763966 crio[2627]: time="2023-12-07 21:08:26.940226369Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ed1bdbe8-9dca-46db-bf99-59e89f293602 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:26 pause-763966 crio[2627]: time="2023-12-07 21:08:26.941998168Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0eb90f82-1bb2-4571-8647-f67d9ed80a7c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:26 pause-763966 crio[2627]: time="2023-12-07 21:08:26.942503167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701983306942390182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=0eb90f82-1bb2-4571-8647-f67d9ed80a7c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:26 pause-763966 crio[2627]: time="2023-12-07 21:08:26.944223166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d1e12d63-ca21-4681-9bf2-8a435dea4f79 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:26 pause-763966 crio[2627]: time="2023-12-07 21:08:26.944326945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d1e12d63-ca21-4681-9bf2-8a435dea4f79 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:26 pause-763966 crio[2627]: time="2023-12-07 21:08:26.944751077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311d35afa7adc6d1d9942b5aec21f92190454da644eaf6f4e7910acd7f2a093b,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983287454897198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5c6617b826def475d3fa2c178ff332e191388d1387175aadf0a351c5181d28,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983287450904396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: df9782b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1223e03355b6739ac1f97e8d18a39a3efc6d93757ea2288ef7308bb21a8bc,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983281881993560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284e513959658a57d171808e0788c6026cbf12c84885f77d2b56924ebb961190,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983281861353133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string
{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877f3c78fa25d75519189e55855e73592a2e6a56b8f5cfee02d78aedc0132db0,PodSandboxId:85990b990dc87995a8dfbd15d19e31173a62b9112d9d3088cf095d9a2eb79c7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983281802925054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2916073a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b913d5fa93c03725b56b7a886180f34b6e79cba88218227920b5c5c188a0c,PodSandboxId:8ad986722a0e9184b3f8541dcbcbb80a47765ba39399db9d884aa3164712f234,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983281832231484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473f50d6336748bfc2b65d297450d2de,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701983265645191956,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: d
f9782b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701983264609239912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486140b51e77711889ed6ef7f61897f6d58b0a3df15a1b02b40c922636892bfb,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1701983264178020573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:map[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200422fadb3739c9c51d92e4e1c0afc57789b5c1f0ec12a5c3629c294275e868,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701983264152804039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022,PodSandboxId:3dfe206eeb05a6b0a0241c2e0ec2e75802ffa6d57ef08814c0fc6a8ef1d122ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701983259742609548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
3f50d6336748bfc2b65d297450d2de,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d,PodSandboxId:652c03a9919f782932691dd53b4d4e9d2d022fac02a6e80365f8d42a6bb8d8e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701983259637589298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string
]string{io.kubernetes.container.hash: 2916073a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d1e12d63-ca21-4681-9bf2-8a435dea4f79 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:26.998104087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=080d4584-3bf7-47f2-a191-75969f517747 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:26.998236537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=080d4584-3bf7-47f2-a191-75969f517747 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.002105293Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=871d883a-1cd5-41d5-9709-3be26e92fad2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.002544713Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701983307002524727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=871d883a-1cd5-41d5-9709-3be26e92fad2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.006633868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f1b88d16-bf3c-44c9-a9d9-23b749e6638a name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.007118706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f1b88d16-bf3c-44c9-a9d9-23b749e6638a name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.008127897Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="go-grpc-middleware/chain.go:25" id=810d684f-f8e3-4c81-8579-50e364f5e1cb name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.008310998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=810d684f-f8e3-4c81-8579-50e364f5e1cb name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.008868141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311d35afa7adc6d1d9942b5aec21f92190454da644eaf6f4e7910acd7f2a093b,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983287454897198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5c6617b826def475d3fa2c178ff332e191388d1387175aadf0a351c5181d28,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983287450904396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: df9782b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1223e03355b6739ac1f97e8d18a39a3efc6d93757ea2288ef7308bb21a8bc,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983281881993560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284e513959658a57d171808e0788c6026cbf12c84885f77d2b56924ebb961190,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983281861353133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string
{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877f3c78fa25d75519189e55855e73592a2e6a56b8f5cfee02d78aedc0132db0,PodSandboxId:85990b990dc87995a8dfbd15d19e31173a62b9112d9d3088cf095d9a2eb79c7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983281802925054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2916073a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b913d5fa93c03725b56b7a886180f34b6e79cba88218227920b5c5c188a0c,PodSandboxId:8ad986722a0e9184b3f8541dcbcbb80a47765ba39399db9d884aa3164712f234,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983281832231484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473f50d6336748bfc2b65d297450d2de,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701983265645191956,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: d
f9782b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701983264609239912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486140b51e77711889ed6ef7f61897f6d58b0a3df15a1b02b40c922636892bfb,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1701983264178020573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:map[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200422fadb3739c9c51d92e4e1c0afc57789b5c1f0ec12a5c3629c294275e868,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701983264152804039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022,PodSandboxId:3dfe206eeb05a6b0a0241c2e0ec2e75802ffa6d57ef08814c0fc6a8ef1d122ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701983259742609548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
3f50d6336748bfc2b65d297450d2de,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d,PodSandboxId:652c03a9919f782932691dd53b4d4e9d2d022fac02a6e80365f8d42a6bb8d8e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701983259637589298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string
]string{io.kubernetes.container.hash: 2916073a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f1b88d16-bf3c-44c9-a9d9-23b749e6638a name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.052255326Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1abc141e-a946-4078-a3ac-445ab0bfbbe6 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.052319657Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1abc141e-a946-4078-a3ac-445ab0bfbbe6 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.053906194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=52591795-e9a7-4d5f-8f4e-cbff15e1e758 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.054275341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701983307054260758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=52591795-e9a7-4d5f-8f4e-cbff15e1e758 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.054833386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8750ef43-155b-465a-8a2b-a139d30c14e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.054881915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8750ef43-155b-465a-8a2b-a139d30c14e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:08:27 pause-763966 crio[2627]: time="2023-12-07 21:08:27.055116978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311d35afa7adc6d1d9942b5aec21f92190454da644eaf6f4e7910acd7f2a093b,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983287454897198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c5c6617b826def475d3fa2c178ff332e191388d1387175aadf0a351c5181d28,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983287450904396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: df9782b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1223e03355b6739ac1f97e8d18a39a3efc6d93757ea2288ef7308bb21a8bc,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983281881993560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284e513959658a57d171808e0788c6026cbf12c84885f77d2b56924ebb961190,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983281861353133,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string
{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877f3c78fa25d75519189e55855e73592a2e6a56b8f5cfee02d78aedc0132db0,PodSandboxId:85990b990dc87995a8dfbd15d19e31173a62b9112d9d3088cf095d9a2eb79c7d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983281802925054,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2916073a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b913d5fa93c03725b56b7a886180f34b6e79cba88218227920b5c5c188a0c,PodSandboxId:8ad986722a0e9184b3f8541dcbcbb80a47765ba39399db9d884aa3164712f234,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983281832231484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473f50d6336748bfc2b65d297450d2de,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447,PodSandboxId:c4bc3275f1d15b143a611553b9679e1bd6eb3e12f6b3fe24039fed09d60b6335,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701983265645191956,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w976v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4ba2f0-5660-4044-9f09-2af3a79c8599,},Annotations:map[string]string{io.kubernetes.container.hash: d
f9782b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f,PodSandboxId:e9cb63e116f1dad5439e37af9f056844b386ff966832de9676133996683a01a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701983264609239912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6llq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0336a5ef-6d08-4058-acfe-4ec206ae8c93,},Annotations:map[string]string{io.kubernetes.container.hash: 2d69782f,io.kubernetes.containe
r.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486140b51e77711889ed6ef7f61897f6d58b0a3df15a1b02b40c922636892bfb,PodSandboxId:692d53fd5068b12e416287a494127ccbb0bba5f4c74a84bac409e995021bf9d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1701983264178020573,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-763966,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d55fbd355d72e17a89f8ce660751049,},Annotations:map[string]string{io.kubernetes.container.hash: 9dd3f3e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:200422fadb3739c9c51d92e4e1c0afc57789b5c1f0ec12a5c3629c294275e868,PodSandboxId:9d5c01a53cb5ff3adb49f4cf39b784f8fd160825eb304571d418ea720b9744c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701983264152804039,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-763966,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: a7570e2f498d4c2bdb38c3f8f4f2acb8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022,PodSandboxId:3dfe206eeb05a6b0a0241c2e0ec2e75802ffa6d57ef08814c0fc6a8ef1d122ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701983259742609548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
3f50d6336748bfc2b65d297450d2de,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d,PodSandboxId:652c03a9919f782932691dd53b4d4e9d2d022fac02a6e80365f8d42a6bb8d8e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701983259637589298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-763966,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e35fa560ea6cdcfebf021df26e28d3,},Annotations:map[string
]string{io.kubernetes.container.hash: 2916073a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8750ef43-155b-465a-8a2b-a139d30c14e0 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	311d35afa7adc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   19 seconds ago      Running             coredns                   2                   e9cb63e116f1d       coredns-5dd5756b68-l6llq
	2c5c6617b826d       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   19 seconds ago      Running             kube-proxy                2                   c4bc3275f1d15       kube-proxy-w976v
	16a1223e03355       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   25 seconds ago      Running             etcd                      2                   692d53fd5068b       etcd-pause-763966
	284e513959658       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   25 seconds ago      Running             kube-scheduler            2                   9d5c01a53cb5f       kube-scheduler-pause-763966
	d36b913d5fa93       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   25 seconds ago      Running             kube-controller-manager   2                   8ad986722a0e9       kube-controller-manager-pause-763966
	877f3c78fa25d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   25 seconds ago      Running             kube-apiserver            2                   85990b990dc87       kube-apiserver-pause-763966
	f03715579d42e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   41 seconds ago      Exited              kube-proxy                1                   c4bc3275f1d15       kube-proxy-w976v
	fcedf568f2752       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   42 seconds ago      Exited              coredns                   1                   e9cb63e116f1d       coredns-5dd5756b68-l6llq
	486140b51e777       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   43 seconds ago      Exited              etcd                      1                   692d53fd5068b       etcd-pause-763966
	200422fadb373       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   43 seconds ago      Exited              kube-scheduler            1                   9d5c01a53cb5f       kube-scheduler-pause-763966
	a3701acc6ea51       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   47 seconds ago      Exited              kube-controller-manager   1                   3dfe206eeb05a       kube-controller-manager-pause-763966
	d538927394a7e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   47 seconds ago      Exited              kube-apiserver            1                   652c03a9919f7       kube-apiserver-pause-763966
	
	* 
	* ==> coredns [311d35afa7adc6d1d9942b5aec21f92190454da644eaf6f4e7910acd7f2a093b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40587 - 42611 "HINFO IN 8037292115368977335.5329363017436627300. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019716054s
	
	* 
	* ==> coredns [fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58351 - 33422 "HINFO IN 5220376274418374812.7681633137589353701. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01942101s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-763966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-763966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=pause-763966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T21_05_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 21:05:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-763966
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 21:08:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 21:08:06 +0000   Thu, 07 Dec 2023 21:05:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 21:08:06 +0000   Thu, 07 Dec 2023 21:05:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 21:08:06 +0000   Thu, 07 Dec 2023 21:05:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 21:08:06 +0000   Thu, 07 Dec 2023 21:05:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    pause-763966
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 aea938af658948548c6c83be99b33cd4
	  System UUID:                aea938af-6589-4854-8c6c-83be99b33cd4
	  Boot ID:                    437d9fe2-13fe-4f5c-8a8d-ae272544b72e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-l6llq                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m28s
	  kube-system                 etcd-pause-763966                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m42s
	  kube-system                 kube-apiserver-pause-763966             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-controller-manager-pause-763966    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-proxy-w976v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-pause-763966             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m24s                  kube-proxy       
	  Normal  Starting                 19s                    kube-proxy       
	  Normal  NodeHasSufficientPID     2m50s (x7 over 2m50s)  kubelet          Node pause-763966 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m50s (x8 over 2m50s)  kubelet          Node pause-763966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m50s (x8 over 2m50s)  kubelet          Node pause-763966 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m42s                  kubelet          Node pause-763966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s                  kubelet          Node pause-763966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m42s                  kubelet          Node pause-763966 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m41s                  kubelet          Node pause-763966 status is now: NodeReady
	  Normal  RegisteredNode           2m30s                  node-controller  Node pause-763966 event: Registered Node pause-763966 in Controller
	  Normal  Starting                 26s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)      kubelet          Node pause-763966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)      kubelet          Node pause-763966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)      kubelet          Node pause-763966 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                     node-controller  Node pause-763966 event: Registered Node pause-763966 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073238] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.826361] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.871270] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.178240] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.208495] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.016878] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.128570] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.149674] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.126621] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.233538] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +10.056516] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +8.760043] systemd-fstab-generator[1258]: Ignoring "noauto" for root device
	[Dec 7 21:06] kauditd_printk_skb: 26 callbacks suppressed
	[Dec 7 21:07] systemd-fstab-generator[2240]: Ignoring "noauto" for root device
	[  +0.592943] systemd-fstab-generator[2375]: Ignoring "noauto" for root device
	[  +0.406459] systemd-fstab-generator[2423]: Ignoring "noauto" for root device
	[  +0.324206] systemd-fstab-generator[2434]: Ignoring "noauto" for root device
	[  +0.640938] systemd-fstab-generator[2516]: Ignoring "noauto" for root device
	[Dec 7 21:08] systemd-fstab-generator[3511]: Ignoring "noauto" for root device
	[  +7.036151] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [16a1223e03355b6739ac1f97e8d18a39a3efc6d93757ea2288ef7308bb21a8bc] <==
	* {"level":"warn","ts":"2023-12-07T21:08:17.618223Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:08:16.953258Z","time spent":"664.918695ms","remote":"127.0.0.1:36426","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-kese4b4tus3b6qiuxusitu3ex4\" mod_revision:443 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-kese4b4tus3b6qiuxusitu3ex4\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-kese4b4tus3b6qiuxusitu3ex4\" > >"}
	{"level":"warn","ts":"2023-12-07T21:08:18.695042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.420117ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-07T21:08:18.695116Z","caller":"traceutil/trace.go:171","msg":"trace[1420117760] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:508; }","duration":"216.510083ms","start":"2023-12-07T21:08:18.478596Z","end":"2023-12-07T21:08:18.695106Z","steps":["trace[1420117760] 'range keys from in-memory index tree'  (duration: 216.391111ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:08:18.695501Z","caller":"traceutil/trace.go:171","msg":"trace[1755123161] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"514.495155ms","start":"2023-12-07T21:08:18.180898Z","end":"2023-12-07T21:08:18.695393Z","steps":["trace[1755123161] 'process raft request'  (duration: 513.814204ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:08:18.695581Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:08:18.180369Z","time spent":"515.162916ms","remote":"127.0.0.1:36374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.237\" mod_revision:454 > success:<request_put:<key:\"/registry/masterleases/192.168.39.237\" value_size:67 lease:6971163505960064215 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.237\" > >"}
	{"level":"warn","ts":"2023-12-07T21:08:19.113006Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.760977ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16194535542814840027 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" mod_revision:450 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" value_size:4314 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-07T21:08:19.113389Z","caller":"traceutil/trace.go:171","msg":"trace[510084002] linearizableReadLoop","detail":"{readStateIndex:561; appliedIndex:559; }","duration":"660.061762ms","start":"2023-12-07T21:08:18.453314Z","end":"2023-12-07T21:08:19.113376Z","steps":["trace[510084002] 'read index received'  (duration: 241.406765ms)","trace[510084002] 'applied index is now lower than readState.Index'  (duration: 418.653925ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-07T21:08:19.113563Z","caller":"traceutil/trace.go:171","msg":"trace[825534801] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"927.325591ms","start":"2023-12-07T21:08:18.186227Z","end":"2023-12-07T21:08:19.113552Z","steps":["trace[825534801] 'process raft request'  (duration: 799.965878ms)","trace[825534801] 'compare'  (duration: 126.673353ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T21:08:19.113993Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:08:18.186211Z","time spent":"927.752137ms","remote":"127.0.0.1:36408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4376,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" mod_revision:450 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" value_size:4314 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-763966\" > >"}
	{"level":"warn","ts":"2023-12-07T21:08:19.113631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"660.32649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-763966\" ","response":"range_response_count:1 size:6830"}
	{"level":"warn","ts":"2023-12-07T21:08:19.1139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"415.970456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"warn","ts":"2023-12-07T21:08:19.11393Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.310078ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2023-12-07T21:08:19.115003Z","caller":"traceutil/trace.go:171","msg":"trace[1251109391] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-763966; range_end:; response_count:1; response_revision:510; }","duration":"661.704004ms","start":"2023-12-07T21:08:18.453289Z","end":"2023-12-07T21:08:19.114993Z","steps":["trace[1251109391] 'agreement among raft nodes before linearized reading'  (duration: 660.289909ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:08:19.115159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:08:18.453277Z","time spent":"661.870339ms","remote":"127.0.0.1:36408","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":6853,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-763966\" "}
	{"level":"info","ts":"2023-12-07T21:08:19.115053Z","caller":"traceutil/trace.go:171","msg":"trace[43502776] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:510; }","duration":"417.129018ms","start":"2023-12-07T21:08:18.697917Z","end":"2023-12-07T21:08:19.115046Z","steps":["trace[43502776] 'agreement among raft nodes before linearized reading'  (duration: 415.948301ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:08:19.115373Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:08:18.697906Z","time spent":"417.459533ms","remote":"127.0.0.1:36404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":445,"request content":"key:\"/registry/services/endpoints/default/kubernetes\" "}
	{"level":"info","ts":"2023-12-07T21:08:19.115124Z","caller":"traceutil/trace.go:171","msg":"trace[915616064] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:510; }","duration":"233.502996ms","start":"2023-12-07T21:08:18.881616Z","end":"2023-12-07T21:08:19.115119Z","steps":["trace[915616064] 'agreement among raft nodes before linearized reading'  (duration: 232.297715ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:08:19.385989Z","caller":"traceutil/trace.go:171","msg":"trace[1716342016] linearizableReadLoop","detail":"{readStateIndex:562; appliedIndex:561; }","duration":"260.25328ms","start":"2023-12-07T21:08:19.125713Z","end":"2023-12-07T21:08:19.385966Z","steps":["trace[1716342016] 'read index received'  (duration: 177.617536ms)","trace[1716342016] 'applied index is now lower than readState.Index'  (duration: 82.635154ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-07T21:08:19.386364Z","caller":"traceutil/trace.go:171","msg":"trace[532659987] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"261.014093ms","start":"2023-12-07T21:08:19.125337Z","end":"2023-12-07T21:08:19.386351Z","steps":["trace[532659987] 'process raft request'  (duration: 178.036714ms)","trace[532659987] 'compare'  (duration: 82.513469ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T21:08:19.386549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.814282ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2023-12-07T21:08:19.386602Z","caller":"traceutil/trace.go:171","msg":"trace[259796995] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:511; }","duration":"257.861461ms","start":"2023-12-07T21:08:19.128732Z","end":"2023-12-07T21:08:19.386593Z","steps":["trace[259796995] 'agreement among raft nodes before linearized reading'  (duration: 257.8026ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:08:19.386465Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.67189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:481"}
	{"level":"info","ts":"2023-12-07T21:08:19.386774Z","caller":"traceutil/trace.go:171","msg":"trace[780446668] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:511; }","duration":"261.069543ms","start":"2023-12-07T21:08:19.125696Z","end":"2023-12-07T21:08:19.386765Z","steps":["trace[780446668] 'agreement among raft nodes before linearized reading'  (duration: 260.572358ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:08:19.386525Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.504931ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-763966\" ","response":"range_response_count:1 size:6830"}
	{"level":"info","ts":"2023-12-07T21:08:19.386911Z","caller":"traceutil/trace.go:171","msg":"trace[1102237510] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-763966; range_end:; response_count:1; response_revision:511; }","duration":"258.888032ms","start":"2023-12-07T21:08:19.128014Z","end":"2023-12-07T21:08:19.386902Z","steps":["trace[1102237510] 'agreement among raft nodes before linearized reading'  (duration: 258.483298ms)"],"step_count":1}
	
	* 
	* ==> etcd [486140b51e77711889ed6ef7f61897f6d58b0a3df15a1b02b40c922636892bfb] <==
	* {"level":"info","ts":"2023-12-07T21:07:45.347269Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-07T21:07:46.4045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-07T21:07:46.40467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-07T21:07:46.404749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be received MsgPreVoteResp from 3f0f97df8a50e0be at term 2"}
	{"level":"info","ts":"2023-12-07T21:07:46.404808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be became candidate at term 3"}
	{"level":"info","ts":"2023-12-07T21:07:46.404845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be received MsgVoteResp from 3f0f97df8a50e0be at term 3"}
	{"level":"info","ts":"2023-12-07T21:07:46.404886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be became leader at term 3"}
	{"level":"info","ts":"2023-12-07T21:07:46.404925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3f0f97df8a50e0be elected leader 3f0f97df8a50e0be at term 3"}
	{"level":"info","ts":"2023-12-07T21:07:46.473864Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3f0f97df8a50e0be","local-member-attributes":"{Name:pause-763966 ClientURLs:[https://192.168.39.237:2379]}","request-path":"/0/members/3f0f97df8a50e0be/attributes","cluster-id":"db2c13b3d7f66f6a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T21:07:46.474143Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:07:46.47598Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:07:46.481847Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.237:2379"}
	{"level":"info","ts":"2023-12-07T21:07:46.478394Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T21:07:46.485668Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T21:07:46.485815Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T21:07:59.306232Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-07T21:07:59.306342Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-763966","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.237:2380"],"advertise-client-urls":["https://192.168.39.237:2379"]}
	{"level":"warn","ts":"2023-12-07T21:07:59.306645Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-07T21:07:59.3067Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-07T21:07:59.308359Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.237:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-07T21:07:59.308502Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.237:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-07T21:07:59.308642Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3f0f97df8a50e0be","current-leader-member-id":"3f0f97df8a50e0be"}
	{"level":"info","ts":"2023-12-07T21:07:59.312748Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.237:2380"}
	{"level":"info","ts":"2023-12-07T21:07:59.312906Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.237:2380"}
	{"level":"info","ts":"2023-12-07T21:07:59.312951Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-763966","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.237:2380"],"advertise-client-urls":["https://192.168.39.237:2379"]}
	
	* 
	* ==> kernel <==
	*  21:08:27 up 3 min,  0 users,  load average: 2.21, 0.86, 0.32
	Linux pause-763966 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [877f3c78fa25d75519189e55855e73592a2e6a56b8f5cfee02d78aedc0132db0] <==
	* I1207 21:08:08.800090       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1207 21:08:08.852095       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1207 21:08:08.888216       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 21:08:08.895031       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 21:08:17.619902       1 trace.go:236] Trace[1244975485]: "Get" accept:application/json, */*,audit-id:f378f76d-ccfa-4933-92ba-d5b4b9c07d91,client:192.168.39.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-763966,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Dec-2023 21:08:16.953) (total time: 666ms):
	Trace[1244975485]: ---"About to write a response" 665ms (21:08:17.619)
	Trace[1244975485]: [666.202561ms] [666.202561ms] END
	I1207 21:08:17.620143       1 trace.go:236] Trace[1268883288]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:f528a569-af7f-423c-ad9e-8a1623164e1c,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-kese4b4tus3b6qiuxusitu3ex4,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (07-Dec-2023 21:08:16.951) (total time: 667ms):
	Trace[1268883288]: ["GuaranteedUpdate etcd3" audit-id:f528a569-af7f-423c-ad9e-8a1623164e1c,key:/leases/kube-system/apiserver-kese4b4tus3b6qiuxusitu3ex4,type:*coordination.Lease,resource:leases.coordination.k8s.io 667ms (21:08:16.951)
	Trace[1268883288]:  ---"Txn call completed" 666ms (21:08:17.619)]
	Trace[1268883288]: [667.990062ms] [667.990062ms] END
	I1207 21:08:18.696543       1 trace.go:236] Trace[1578833462]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.237,type:*v1.Endpoints,resource:apiServerIPInfo (07-Dec-2023 21:08:18.034) (total time: 662ms):
	Trace[1578833462]: ---"Transaction prepared" 144ms (21:08:18.180)
	Trace[1578833462]: ---"Txn call completed" 516ms (21:08:18.696)
	Trace[1578833462]: [662.163885ms] [662.163885ms] END
	I1207 21:08:19.117905       1 trace.go:236] Trace[1540491190]: "Get" accept:application/json, */*,audit-id:38097c31-cf7e-4bd5-af68-f239a8b200fc,client:192.168.39.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-763966,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Dec-2023 21:08:18.452) (total time: 665ms):
	Trace[1540491190]: ---"About to write a response" 663ms (21:08:19.115)
	Trace[1540491190]: [665.059274ms] [665.059274ms] END
	I1207 21:08:19.118193       1 trace.go:236] Trace[1483288823]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:729ff8c8-c917-4f10-be80-8ca3bac30aef,client:192.168.39.237,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-763966/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (07-Dec-2023 21:08:18.183) (total time: 935ms):
	Trace[1483288823]: ["GuaranteedUpdate etcd3" audit-id:729ff8c8-c917-4f10-be80-8ca3bac30aef,key:/pods/kube-system/kube-scheduler-pause-763966,type:*core.Pod,resource:pods 934ms (21:08:18.183)
	Trace[1483288823]:  ---"Txn call completed" 930ms (21:08:19.116)]
	Trace[1483288823]: ---"Object stored in database" 931ms (21:08:19.116)
	Trace[1483288823]: [935.037612ms] [935.037612ms] END
	I1207 21:08:19.591039       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 21:08:19.648343       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [d538927394a7e372abd2775a6963a497ce2d9bbdcbed2493dcf3cf3963c8888d] <==
	* 
	* 
	* ==> kube-controller-manager [a3701acc6ea51d83a4df84f18beb9cb89ce8857620b7671a4e48a0d8ff11b022] <==
	* 
	* 
	* ==> kube-controller-manager [d36b913d5fa93c03725b56b7a886180f34b6e79cba88218227920b5c5c188a0c] <==
	* I1207 21:08:19.602629       1 shared_informer.go:318] Caches are synced for attach detach
	I1207 21:08:19.602766       1 shared_informer.go:318] Caches are synced for PVC protection
	I1207 21:08:19.602809       1 shared_informer.go:318] Caches are synced for service account
	I1207 21:08:19.605267       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1207 21:08:19.605333       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1207 21:08:19.605284       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1207 21:08:19.609376       1 shared_informer.go:318] Caches are synced for TTL
	I1207 21:08:19.628266       1 shared_informer.go:318] Caches are synced for PV protection
	I1207 21:08:19.630584       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1207 21:08:19.633150       1 shared_informer.go:318] Caches are synced for GC
	I1207 21:08:19.692168       1 shared_informer.go:318] Caches are synced for daemon sets
	I1207 21:08:19.716661       1 shared_informer.go:318] Caches are synced for taint
	I1207 21:08:19.716935       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1207 21:08:19.717034       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1207 21:08:19.717096       1 taint_manager.go:210] "Sending events to api server"
	I1207 21:08:19.717130       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-763966"
	I1207 21:08:19.717210       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1207 21:08:19.717555       1 event.go:307] "Event occurred" object="pause-763966" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-763966 event: Registered Node pause-763966 in Controller"
	I1207 21:08:19.725762       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 21:08:19.752636       1 shared_informer.go:318] Caches are synced for deployment
	I1207 21:08:19.785617       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 21:08:19.808643       1 shared_informer.go:318] Caches are synced for disruption
	I1207 21:08:20.131218       1 shared_informer.go:318] Caches are synced for garbage collector
	I1207 21:08:20.131324       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1207 21:08:20.154473       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [2c5c6617b826def475d3fa2c178ff332e191388d1387175aadf0a351c5181d28] <==
	* I1207 21:08:07.638882       1 server_others.go:69] "Using iptables proxy"
	I1207 21:08:07.648245       1 node.go:141] Successfully retrieved node IP: 192.168.39.237
	I1207 21:08:07.685668       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 21:08:07.685724       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 21:08:07.688524       1 server_others.go:152] "Using iptables Proxier"
	I1207 21:08:07.688622       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 21:08:07.688861       1 server.go:846] "Version info" version="v1.28.4"
	I1207 21:08:07.688896       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 21:08:07.689997       1 config.go:188] "Starting service config controller"
	I1207 21:08:07.690062       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 21:08:07.690086       1 config.go:97] "Starting endpoint slice config controller"
	I1207 21:08:07.690118       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 21:08:07.690732       1 config.go:315] "Starting node config controller"
	I1207 21:08:07.690770       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 21:08:07.790796       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 21:08:07.790877       1 shared_informer.go:318] Caches are synced for service config
	I1207 21:08:07.790894       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447] <==
	* I1207 21:07:46.121250       1 server_others.go:69] "Using iptables proxy"
	E1207 21:07:46.125210       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763966": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:47.190094       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763966": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:49.410295       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763966": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:53.685250       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763966": dial tcp 192.168.39.237:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [200422fadb3739c9c51d92e4e1c0afc57789b5c1f0ec12a5c3629c294275e868] <==
	* E1207 21:07:54.635258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.237:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:54.923990       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.237:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:54.924060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.237:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:55.040944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.237:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:55.041097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.237:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:55.123091       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:55.123184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:55.217220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.237:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:55.217382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.237:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:55.651008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.237:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:55.651128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.237:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:55.701751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:55.701887       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:56.664101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:56.664218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.237:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:56.946586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.237:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:56.946720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.237:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	W1207 21:07:57.228009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.237:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:57.228135       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.237:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	E1207 21:07:59.460262       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I1207 21:07:59.460973       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1207 21:07:59.461048       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1207 21:07:59.461091       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 21:07:59.461712       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1207 21:07:59.461888       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [284e513959658a57d171808e0788c6026cbf12c84885f77d2b56924ebb961190] <==
	* I1207 21:08:04.060742       1 serving.go:348] Generated self-signed cert in-memory
	W1207 21:08:06.764958       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 21:08:06.765037       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 21:08:06.765065       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 21:08:06.765088       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 21:08:06.822976       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1207 21:08:06.823059       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 21:08:06.826516       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 21:08:06.826639       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 21:08:06.828950       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1207 21:08:06.829037       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1207 21:08:06.928346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 21:05:13 UTC, ends at Thu 2023-12-07 21:08:28 UTC. --
	Dec 07 21:08:01 pause-763966 kubelet[3517]: W1207 21:08:01.975396    3517 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-763966&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:01 pause-763966 kubelet[3517]: E1207 21:08:01.975513    3517 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-763966&limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.068320    3517 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-763966.179ea8c6d41a293d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-763966", UID:"pause-763966", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"pause-763966"}, FirstTimestamp:time.Date(2023, time.December, 7, 21, 8, 1, 108101437, time.Local), LastTimestamp:time.Date(2
023, time.December, 7, 21, 8, 1, 108101437, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-763966"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.237:8443: connect: connection refused'(may retry after sleeping)
	Dec 07 21:08:02 pause-763966 kubelet[3517]: W1207 21:08:02.534016    3517 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.534070    3517 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.539675    3517 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-763966?timeout=10s\": dial tcp 192.168.39.237:8443: connect: connection refused" interval="1.6s"
	Dec 07 21:08:02 pause-763966 kubelet[3517]: W1207 21:08:02.591324    3517 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.591374    3517 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: W1207 21:08:02.639194    3517 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.639250    3517 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.237:8443: connect: connection refused
	Dec 07 21:08:02 pause-763966 kubelet[3517]: I1207 21:08:02.652717    3517 kubelet_node_status.go:70] "Attempting to register node" node="pause-763966"
	Dec 07 21:08:02 pause-763966 kubelet[3517]: E1207 21:08:02.653114    3517 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.237:8443: connect: connection refused" node="pause-763966"
	Dec 07 21:08:04 pause-763966 kubelet[3517]: I1207 21:08:04.254838    3517 kubelet_node_status.go:70] "Attempting to register node" node="pause-763966"
	Dec 07 21:08:06 pause-763966 kubelet[3517]: I1207 21:08:06.877498    3517 kubelet_node_status.go:108] "Node was previously registered" node="pause-763966"
	Dec 07 21:08:06 pause-763966 kubelet[3517]: I1207 21:08:06.877598    3517 kubelet_node_status.go:73] "Successfully registered node" node="pause-763966"
	Dec 07 21:08:06 pause-763966 kubelet[3517]: I1207 21:08:06.879625    3517 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 07 21:08:06 pause-763966 kubelet[3517]: I1207 21:08:06.880542    3517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.102281    3517 apiserver.go:52] "Watching apiserver"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.111358    3517 topology_manager.go:215] "Topology Admit Handler" podUID="fb4ba2f0-5660-4044-9f09-2af3a79c8599" podNamespace="kube-system" podName="kube-proxy-w976v"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.114905    3517 topology_manager.go:215] "Topology Admit Handler" podUID="0336a5ef-6d08-4058-acfe-4ec206ae8c93" podNamespace="kube-system" podName="coredns-5dd5756b68-l6llq"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.133473    3517 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.177826    3517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb4ba2f0-5660-4044-9f09-2af3a79c8599-lib-modules\") pod \"kube-proxy-w976v\" (UID: \"fb4ba2f0-5660-4044-9f09-2af3a79c8599\") " pod="kube-system/kube-proxy-w976v"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.177857    3517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb4ba2f0-5660-4044-9f09-2af3a79c8599-xtables-lock\") pod \"kube-proxy-w976v\" (UID: \"fb4ba2f0-5660-4044-9f09-2af3a79c8599\") " pod="kube-system/kube-proxy-w976v"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.416235    3517 scope.go:117] "RemoveContainer" containerID="fcedf568f2752dff3383726802fa736366021cec7ba5fa260f2fd00e26b7952f"
	Dec 07 21:08:07 pause-763966 kubelet[3517]: I1207 21:08:07.416646    3517 scope.go:117] "RemoveContainer" containerID="f03715579d42e52d3a0a2671955ab96bdee433d2a541561202cc2bebc8ce6447"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:08:26.536665   49007 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17719-9628/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-763966 -n pause-763966
helpers_test.go:261: (dbg) Run:  kubectl --context pause-763966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (106.78s)
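For reference, the two post-mortem probes recorded above (the minikube API-server status query and the non-Running pod listing) can be re-run outside the test harness. The following is a minimal Go sketch, not part of helpers_test.go, and it assumes the out/minikube-linux-amd64 binary and the pause-763966 profile/kubectl context from this run are still present:

	// postmortem.go - illustrative sketch only; assumes out/minikube-linux-amd64
	// is on disk and a "pause-763966" kubectl context exists (as in this report).
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, mirroring the
	// "(dbg) Run:" lines in this report.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s(err: %v)\n", name, args, out, err)
	}

	func main() {
		// API server state for the profile under test.
		run("out/minikube-linux-amd64", "status", "--format={{.APIServer}}",
			"-p", "pause-763966", "-n", "pause-763966")
		// Pods that are not in the Running phase, across all namespaces.
		run("kubectl", "--context", "pause-763966", "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running")
	}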

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-483745 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-483745 --alsologtostderr -v=3: exit status 82 (2m1.785496727s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-483745"  ...
	* Stopping node "old-k8s-version-483745"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 21:08:10.877682   48742 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:08:10.877833   48742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:08:10.877845   48742 out.go:309] Setting ErrFile to fd 2...
	I1207 21:08:10.877852   48742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:08:10.878071   48742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:08:10.878369   48742 out.go:303] Setting JSON to false
	I1207 21:08:10.878499   48742 mustload.go:65] Loading cluster: old-k8s-version-483745
	I1207 21:08:10.878928   48742 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:08:10.879044   48742 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/config.json ...
	I1207 21:08:10.879274   48742 mustload.go:65] Loading cluster: old-k8s-version-483745
	I1207 21:08:10.879442   48742 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:08:10.879486   48742 stop.go:39] StopHost: old-k8s-version-483745
	I1207 21:08:10.880029   48742 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:08:10.880094   48742 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:08:10.895474   48742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I1207 21:08:10.895969   48742 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:08:10.896537   48742 main.go:141] libmachine: Using API Version  1
	I1207 21:08:10.896557   48742 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:08:10.897061   48742 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:08:10.899717   48742 out.go:177] * Stopping node "old-k8s-version-483745"  ...
	I1207 21:08:10.901191   48742 main.go:141] libmachine: Stopping "old-k8s-version-483745"...
	I1207 21:08:10.901243   48742 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:08:10.903338   48742 main.go:141] libmachine: (old-k8s-version-483745) Calling .Stop
	I1207 21:08:10.906912   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 0/60
	I1207 21:08:11.908737   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 1/60
	I1207 21:08:12.910877   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 2/60
	I1207 21:08:13.912525   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 3/60
	I1207 21:08:14.913803   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 4/60
	I1207 21:08:15.915664   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 5/60
	I1207 21:08:16.917557   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 6/60
	I1207 21:08:17.919759   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 7/60
	I1207 21:08:18.921561   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 8/60
	I1207 21:08:19.923039   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 9/60
	I1207 21:08:20.925525   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 10/60
	I1207 21:08:21.926951   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 11/60
	I1207 21:08:22.928787   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 12/60
	I1207 21:08:23.930236   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 13/60
	I1207 21:08:24.932376   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 14/60
	I1207 21:08:25.934270   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 15/60
	I1207 21:08:26.936670   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 16/60
	I1207 21:08:27.938183   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 17/60
	I1207 21:08:29.286542   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 18/60
	I1207 21:08:30.288164   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 19/60
	I1207 21:08:31.290337   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 20/60
	I1207 21:08:32.292765   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 21/60
	I1207 21:08:33.294468   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 22/60
	I1207 21:08:34.296475   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 23/60
	I1207 21:08:35.297993   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 24/60
	I1207 21:08:36.299672   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 25/60
	I1207 21:08:37.301117   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 26/60
	I1207 21:08:38.303537   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 27/60
	I1207 21:08:39.305575   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 28/60
	I1207 21:08:40.307541   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 29/60
	I1207 21:08:41.309235   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 30/60
	I1207 21:08:42.310798   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 31/60
	I1207 21:08:43.312860   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 32/60
	I1207 21:08:44.314417   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 33/60
	I1207 21:08:45.316643   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 34/60
	I1207 21:08:46.319113   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 35/60
	I1207 21:08:47.320318   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 36/60
	I1207 21:08:48.322017   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 37/60
	I1207 21:08:49.323290   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 38/60
	I1207 21:08:50.324802   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 39/60
	I1207 21:08:51.327056   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 40/60
	I1207 21:08:52.328606   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 41/60
	I1207 21:08:53.329885   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 42/60
	I1207 21:08:54.331262   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 43/60
	I1207 21:08:55.332431   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 44/60
	I1207 21:08:56.334493   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 45/60
	I1207 21:08:57.336636   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 46/60
	I1207 21:08:58.339259   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 47/60
	I1207 21:08:59.340834   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 48/60
	I1207 21:09:00.342069   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 49/60
	I1207 21:09:01.344029   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 50/60
	I1207 21:09:02.345881   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 51/60
	I1207 21:09:03.347376   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 52/60
	I1207 21:09:04.349610   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 53/60
	I1207 21:09:05.351190   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 54/60
	I1207 21:09:06.353150   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 55/60
	I1207 21:09:07.355175   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 56/60
	I1207 21:09:08.356611   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 57/60
	I1207 21:09:09.358180   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 58/60
	I1207 21:09:10.360780   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 59/60
	I1207 21:09:11.361446   48742 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1207 21:09:11.361498   48742 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:09:11.361535   48742 retry.go:31] will retry after 1.102111835s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:09:12.464397   48742 stop.go:39] StopHost: old-k8s-version-483745
	I1207 21:09:12.464866   48742 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:09:12.464925   48742 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:09:12.484881   48742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I1207 21:09:12.485524   48742 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:09:12.486204   48742 main.go:141] libmachine: Using API Version  1
	I1207 21:09:12.486243   48742 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:09:12.486637   48742 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:09:12.488371   48742 out.go:177] * Stopping node "old-k8s-version-483745"  ...
	I1207 21:09:12.489996   48742 main.go:141] libmachine: Stopping "old-k8s-version-483745"...
	I1207 21:09:12.490018   48742 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:09:12.492051   48742 main.go:141] libmachine: (old-k8s-version-483745) Calling .Stop
	I1207 21:09:12.496225   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 0/60
	I1207 21:09:13.498152   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 1/60
	I1207 21:09:14.500808   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 2/60
	I1207 21:09:15.502285   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 3/60
	I1207 21:09:16.504803   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 4/60
	I1207 21:09:17.506737   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 5/60
	I1207 21:09:18.508463   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 6/60
	I1207 21:09:19.510095   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 7/60
	I1207 21:09:20.511499   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 8/60
	I1207 21:09:21.512951   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 9/60
	I1207 21:09:22.514815   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 10/60
	I1207 21:09:23.516502   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 11/60
	I1207 21:09:24.517916   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 12/60
	I1207 21:09:25.519597   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 13/60
	I1207 21:09:26.521055   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 14/60
	I1207 21:09:27.522917   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 15/60
	I1207 21:09:28.524422   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 16/60
	I1207 21:09:29.525744   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 17/60
	I1207 21:09:30.528002   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 18/60
	I1207 21:09:31.529369   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 19/60
	I1207 21:09:32.531192   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 20/60
	I1207 21:09:33.532607   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 21/60
	I1207 21:09:34.534088   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 22/60
	I1207 21:09:35.535413   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 23/60
	I1207 21:09:36.536657   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 24/60
	I1207 21:09:37.538296   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 25/60
	I1207 21:09:38.539634   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 26/60
	I1207 21:09:39.541683   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 27/60
	I1207 21:09:40.543245   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 28/60
	I1207 21:09:41.544620   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 29/60
	I1207 21:09:42.546254   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 30/60
	I1207 21:09:43.548427   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 31/60
	I1207 21:09:44.549817   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 32/60
	I1207 21:09:45.551160   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 33/60
	I1207 21:09:46.552447   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 34/60
	I1207 21:09:47.554118   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 35/60
	I1207 21:09:48.555539   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 36/60
	I1207 21:09:49.556758   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 37/60
	I1207 21:09:50.558241   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 38/60
	I1207 21:09:51.559605   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 39/60
	I1207 21:09:52.561434   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 40/60
	I1207 21:09:53.562886   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 41/60
	I1207 21:09:54.564398   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 42/60
	I1207 21:09:55.566025   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 43/60
	I1207 21:09:56.567174   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 44/60
	I1207 21:09:57.568893   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 45/60
	I1207 21:09:58.570053   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 46/60
	I1207 21:09:59.571371   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 47/60
	I1207 21:10:00.573657   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 48/60
	I1207 21:10:01.574935   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 49/60
	I1207 21:10:02.577225   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 50/60
	I1207 21:10:03.578611   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 51/60
	I1207 21:10:04.580282   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 52/60
	I1207 21:10:05.581835   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 53/60
	I1207 21:10:06.583317   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 54/60
	I1207 21:10:07.585718   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 55/60
	I1207 21:10:08.587065   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 56/60
	I1207 21:10:09.588605   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 57/60
	I1207 21:10:10.589981   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 58/60
	I1207 21:10:11.591464   48742 main.go:141] libmachine: (old-k8s-version-483745) Waiting for machine to stop 59/60
	I1207 21:10:12.592639   48742 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1207 21:10:12.592691   48742 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:10:12.594641   48742 out.go:177] 
	W1207 21:10:12.596098   48742 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1207 21:10:12.596116   48742 out.go:239] * 
	* 
	W1207 21:10:12.598648   48742 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 21:10:12.600260   48742 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-483745 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-483745 -n old-k8s-version-483745
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-483745 -n old-k8s-version-483745: exit status 3 (18.456791148s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:10:31.058245   49953 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.171:22: connect: no route to host
	E1207 21:10:31.058266   49953 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.171:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-483745" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.24s)
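The trace above shows the pattern that produces exit status 82 in all of these Stop failures: libmachine polls the KVM domain once per second for 60 attempts, retries the whole stop once after a short backoff, and finally surfaces GUEST_STOP_TIMEOUT because the VM never leaves the "Running" state. Below is a minimal Go sketch of that poll-then-retry pattern; the function names, the small attempt budget, and the single retry are assumptions drawn only from this log, not minikube's actual stop.go.

// Illustrative sketch of the poll-and-retry stop behaviour seen in the log
// above: poll the domain state up to `attempts` times, retry the whole stop
// once after a short backoff, then give up with a timeout error.
// All names and parameters here are assumptions based on this log only.
package main

import (
	"fmt"
	"time"
)

func stopOnce(requestStop func() error, state func() string, attempts int, interval time.Duration) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ { // "Waiting for machine to stop i/60" in the log
		if state() != "Running" {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("unable to stop vm, current state %q", state())
}

func stopWithRetry(requestStop func() error, state func() string) error {
	// The log shows 60 one-second polls per pass; small values keep the sketch quick.
	if err := stopOnce(requestStop, state, 3, 10*time.Millisecond); err == nil {
		return nil
	}
	time.Sleep(10 * time.Millisecond) // the log shows a ~1s backoff before the second pass
	if err := stopOnce(requestStop, state, 3, 10*time.Millisecond); err != nil {
		return fmt.Errorf("GUEST_STOP_TIMEOUT: %w", err)
	}
	return nil
}

func main() {
	err := stopWithRetry(
		func() error { return nil },        // hypothetical shutdown request to the driver
		func() string { return "Running" }, // domain never leaves Running, as in the log
	)
	fmt.Println(err)
}

Running the sketch prints a GUEST_STOP_TIMEOUT error, mirroring the two "Waiting for machine to stop 0/60 ... 59/60" passes and the final "unable to stop vm" exit recorded above.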

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-598346 --alsologtostderr -v=3
E1207 21:09:28.942092   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-598346 --alsologtostderr -v=3: exit status 82 (2m1.440294721s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-598346"  ...
	* Stopping node "embed-certs-598346"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 21:09:03.914355   49603 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:09:03.914562   49603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:09:03.914574   49603 out.go:309] Setting ErrFile to fd 2...
	I1207 21:09:03.914581   49603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:09:03.914885   49603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:09:03.915225   49603 out.go:303] Setting JSON to false
	I1207 21:09:03.915323   49603 mustload.go:65] Loading cluster: embed-certs-598346
	I1207 21:09:03.915833   49603 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:09:03.915921   49603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/config.json ...
	I1207 21:09:03.916688   49603 mustload.go:65] Loading cluster: embed-certs-598346
	I1207 21:09:03.916851   49603 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:09:03.916905   49603 stop.go:39] StopHost: embed-certs-598346
	I1207 21:09:03.917495   49603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:09:03.917557   49603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:09:03.936965   49603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I1207 21:09:03.941020   49603 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:09:03.941814   49603 main.go:141] libmachine: Using API Version  1
	I1207 21:09:03.941839   49603 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:09:03.942223   49603 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:09:03.944565   49603 out.go:177] * Stopping node "embed-certs-598346"  ...
	I1207 21:09:03.946514   49603 main.go:141] libmachine: Stopping "embed-certs-598346"...
	I1207 21:09:03.946536   49603 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:09:03.948628   49603 main.go:141] libmachine: (embed-certs-598346) Calling .Stop
	I1207 21:09:03.952038   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 0/60
	I1207 21:09:04.953855   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 1/60
	I1207 21:09:05.955565   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 2/60
	I1207 21:09:06.956924   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 3/60
	I1207 21:09:07.958556   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 4/60
	I1207 21:09:08.960399   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 5/60
	I1207 21:09:09.962107   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 6/60
	I1207 21:09:10.963610   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 7/60
	I1207 21:09:11.965313   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 8/60
	I1207 21:09:12.967053   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 9/60
	I1207 21:09:13.969594   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 10/60
	I1207 21:09:14.971133   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 11/60
	I1207 21:09:15.973103   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 12/60
	I1207 21:09:16.975084   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 13/60
	I1207 21:09:17.976578   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 14/60
	I1207 21:09:18.978577   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 15/60
	I1207 21:09:19.980941   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 16/60
	I1207 21:09:20.982557   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 17/60
	I1207 21:09:21.984058   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 18/60
	I1207 21:09:22.985539   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 19/60
	I1207 21:09:23.987742   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 20/60
	I1207 21:09:24.989062   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 21/60
	I1207 21:09:25.990936   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 22/60
	I1207 21:09:26.992729   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 23/60
	I1207 21:09:27.994961   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 24/60
	I1207 21:09:28.996661   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 25/60
	I1207 21:09:29.997957   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 26/60
	I1207 21:09:30.999693   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 27/60
	I1207 21:09:32.001675   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 28/60
	I1207 21:09:33.003356   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 29/60
	I1207 21:09:34.005043   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 30/60
	I1207 21:09:35.006395   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 31/60
	I1207 21:09:36.008590   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 32/60
	I1207 21:09:37.010185   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 33/60
	I1207 21:09:38.012435   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 34/60
	I1207 21:09:39.014032   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 35/60
	I1207 21:09:40.015443   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 36/60
	I1207 21:09:41.016760   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 37/60
	I1207 21:09:42.018062   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 38/60
	I1207 21:09:43.020374   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 39/60
	I1207 21:09:44.022278   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 40/60
	I1207 21:09:45.024476   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 41/60
	I1207 21:09:46.025872   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 42/60
	I1207 21:09:47.027406   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 43/60
	I1207 21:09:48.029126   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 44/60
	I1207 21:09:49.031088   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 45/60
	I1207 21:09:50.032556   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 46/60
	I1207 21:09:51.033887   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 47/60
	I1207 21:09:52.035319   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 48/60
	I1207 21:09:53.036637   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 49/60
	I1207 21:09:54.038482   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 50/60
	I1207 21:09:55.040454   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 51/60
	I1207 21:09:56.041879   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 52/60
	I1207 21:09:57.043205   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 53/60
	I1207 21:09:58.044622   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 54/60
	I1207 21:09:59.046725   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 55/60
	I1207 21:10:00.048554   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 56/60
	I1207 21:10:01.049868   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 57/60
	I1207 21:10:02.051245   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 58/60
	I1207 21:10:03.052604   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 59/60
	I1207 21:10:04.053199   49603 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1207 21:10:04.053271   49603 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:10:04.053296   49603 retry.go:31] will retry after 1.10674266s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:10:05.160516   49603 stop.go:39] StopHost: embed-certs-598346
	I1207 21:10:05.160911   49603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:10:05.160956   49603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:10:05.176026   49603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45391
	I1207 21:10:05.176477   49603 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:10:05.176906   49603 main.go:141] libmachine: Using API Version  1
	I1207 21:10:05.176927   49603 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:10:05.177274   49603 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:10:05.179642   49603 out.go:177] * Stopping node "embed-certs-598346"  ...
	I1207 21:10:05.181264   49603 main.go:141] libmachine: Stopping "embed-certs-598346"...
	I1207 21:10:05.181280   49603 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:10:05.183044   49603 main.go:141] libmachine: (embed-certs-598346) Calling .Stop
	I1207 21:10:05.186615   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 0/60
	I1207 21:10:06.188794   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 1/60
	I1207 21:10:07.191236   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 2/60
	I1207 21:10:08.192706   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 3/60
	I1207 21:10:09.194043   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 4/60
	I1207 21:10:10.195903   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 5/60
	I1207 21:10:11.197521   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 6/60
	I1207 21:10:12.198985   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 7/60
	I1207 21:10:13.200517   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 8/60
	I1207 21:10:14.203015   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 9/60
	I1207 21:10:15.205101   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 10/60
	I1207 21:10:16.206828   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 11/60
	I1207 21:10:17.208604   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 12/60
	I1207 21:10:18.210014   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 13/60
	I1207 21:10:19.211542   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 14/60
	I1207 21:10:20.213154   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 15/60
	I1207 21:10:21.214640   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 16/60
	I1207 21:10:22.216238   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 17/60
	I1207 21:10:23.217781   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 18/60
	I1207 21:10:24.219021   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 19/60
	I1207 21:10:25.220910   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 20/60
	I1207 21:10:26.222136   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 21/60
	I1207 21:10:27.223462   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 22/60
	I1207 21:10:28.224690   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 23/60
	I1207 21:10:29.225997   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 24/60
	I1207 21:10:30.227726   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 25/60
	I1207 21:10:31.228777   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 26/60
	I1207 21:10:32.230081   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 27/60
	I1207 21:10:33.231398   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 28/60
	I1207 21:10:34.232632   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 29/60
	I1207 21:10:35.234761   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 30/60
	I1207 21:10:36.236196   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 31/60
	I1207 21:10:37.237795   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 32/60
	I1207 21:10:38.239133   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 33/60
	I1207 21:10:39.240590   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 34/60
	I1207 21:10:40.242395   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 35/60
	I1207 21:10:41.243704   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 36/60
	I1207 21:10:42.245232   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 37/60
	I1207 21:10:43.246851   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 38/60
	I1207 21:10:44.248379   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 39/60
	I1207 21:10:45.250106   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 40/60
	I1207 21:10:46.251644   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 41/60
	I1207 21:10:47.252946   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 42/60
	I1207 21:10:48.254470   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 43/60
	I1207 21:10:49.255836   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 44/60
	I1207 21:10:50.257484   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 45/60
	I1207 21:10:51.258656   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 46/60
	I1207 21:10:52.259860   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 47/60
	I1207 21:10:53.261259   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 48/60
	I1207 21:10:54.262608   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 49/60
	I1207 21:10:55.264352   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 50/60
	I1207 21:10:56.265702   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 51/60
	I1207 21:10:57.266944   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 52/60
	I1207 21:10:58.268252   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 53/60
	I1207 21:10:59.269542   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 54/60
	I1207 21:11:00.271159   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 55/60
	I1207 21:11:01.272494   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 56/60
	I1207 21:11:02.274425   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 57/60
	I1207 21:11:03.275782   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 58/60
	I1207 21:11:04.277196   49603 main.go:141] libmachine: (embed-certs-598346) Waiting for machine to stop 59/60
	I1207 21:11:05.278243   49603 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1207 21:11:05.278287   49603 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:11:05.280300   49603 out.go:177] 
	W1207 21:11:05.281808   49603 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1207 21:11:05.281826   49603 out.go:239] * 
	* 
	W1207 21:11:05.284319   49603 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 21:11:05.285807   49603 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-598346 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598346 -n embed-certs-598346
E1207 21:11:05.939870   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598346 -n embed-certs-598346: exit status 3 (18.511089757s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:11:23.798118   50395 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host
	E1207 21:11:23.798131   50395 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-598346" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.95s)
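The post-mortem after each failed stop follows the same path: the status check tries to open an SSH session to the node, the dial to port 22 fails with "no route to host", and the helper records the host as Error and skips log retrieval. The sketch below illustrates that classification using a plain net.DialTimeout as a stand-in for minikube's SSH session setup; the address is taken from the log above, and the rest is an assumption, not the real status.go logic.

// Sketch of how a "no route to host" dial failure maps to the Error host
// state reported above. net.DialTimeout stands in for the SSH client.
package main

import (
	"fmt"
	"net"
	"time"
)

func hostState(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// e.g. "dial tcp 192.168.72.180:22: connect: no route to host"
		fmt.Println("status error:", err)
		return "Error"
	}
	conn.Close()
	return "Running"
}

func main() {
	state := hostState("192.168.72.180:22") // node address from the log above
	if state != "Running" {
		fmt.Printf("host is not running, skipping log retrieval (state=%q)\n", state)
	}
}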

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-950431 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-950431 --alsologtostderr -v=3: exit status 82 (2m0.827332589s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-950431"  ...
	* Stopping node "no-preload-950431"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 21:10:19.855219   50059 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:10:19.855336   50059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:10:19.855344   50059 out.go:309] Setting ErrFile to fd 2...
	I1207 21:10:19.855348   50059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:10:19.855545   50059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:10:19.855759   50059 out.go:303] Setting JSON to false
	I1207 21:10:19.855835   50059 mustload.go:65] Loading cluster: no-preload-950431
	I1207 21:10:19.856171   50059 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:10:19.856241   50059 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:10:19.856403   50059 mustload.go:65] Loading cluster: no-preload-950431
	I1207 21:10:19.856504   50059 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:10:19.856530   50059 stop.go:39] StopHost: no-preload-950431
	I1207 21:10:19.856923   50059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:10:19.856977   50059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:10:19.871749   50059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I1207 21:10:19.872194   50059 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:10:19.872868   50059 main.go:141] libmachine: Using API Version  1
	I1207 21:10:19.872894   50059 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:10:19.873234   50059 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:10:19.875571   50059 out.go:177] * Stopping node "no-preload-950431"  ...
	I1207 21:10:19.877294   50059 main.go:141] libmachine: Stopping "no-preload-950431"...
	I1207 21:10:19.877323   50059 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:10:19.879200   50059 main.go:141] libmachine: (no-preload-950431) Calling .Stop
	I1207 21:10:19.882532   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 0/60
	I1207 21:10:20.883866   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 1/60
	I1207 21:10:21.885240   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 2/60
	I1207 21:10:22.886711   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 3/60
	I1207 21:10:23.887963   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 4/60
	I1207 21:10:24.889757   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 5/60
	I1207 21:10:25.891267   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 6/60
	I1207 21:10:26.892584   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 7/60
	I1207 21:10:27.894018   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 8/60
	I1207 21:10:28.895369   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 9/60
	I1207 21:10:29.896756   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 10/60
	I1207 21:10:30.898134   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 11/60
	I1207 21:10:31.899498   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 12/60
	I1207 21:10:32.900843   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 13/60
	I1207 21:10:33.902283   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 14/60
	I1207 21:10:34.904708   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 15/60
	I1207 21:10:35.906235   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 16/60
	I1207 21:10:36.907547   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 17/60
	I1207 21:10:37.909074   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 18/60
	I1207 21:10:38.910594   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 19/60
	I1207 21:10:39.912928   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 20/60
	I1207 21:10:40.914283   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 21/60
	I1207 21:10:41.915529   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 22/60
	I1207 21:10:42.917055   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 23/60
	I1207 21:10:43.918353   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 24/60
	I1207 21:10:44.920257   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 25/60
	I1207 21:10:45.921635   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 26/60
	I1207 21:10:46.922907   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 27/60
	I1207 21:10:47.924319   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 28/60
	I1207 21:10:48.925565   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 29/60
	I1207 21:10:49.927684   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 30/60
	I1207 21:10:50.929066   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 31/60
	I1207 21:10:51.930353   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 32/60
	I1207 21:10:52.931655   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 33/60
	I1207 21:10:53.932996   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 34/60
	I1207 21:10:54.935146   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 35/60
	I1207 21:10:55.936521   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 36/60
	I1207 21:10:56.937984   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 37/60
	I1207 21:10:57.939338   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 38/60
	I1207 21:10:58.940824   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 39/60
	I1207 21:10:59.943295   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 40/60
	I1207 21:11:00.944659   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 41/60
	I1207 21:11:01.945901   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 42/60
	I1207 21:11:02.947582   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 43/60
	I1207 21:11:03.948978   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 44/60
	I1207 21:11:04.950379   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 45/60
	I1207 21:11:05.951835   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 46/60
	I1207 21:11:06.953249   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 47/60
	I1207 21:11:07.954580   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 48/60
	I1207 21:11:08.955868   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 49/60
	I1207 21:11:09.957212   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 50/60
	I1207 21:11:10.958707   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 51/60
	I1207 21:11:11.960203   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 52/60
	I1207 21:11:12.961546   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 53/60
	I1207 21:11:13.963883   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 54/60
	I1207 21:11:14.965777   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 55/60
	I1207 21:11:15.967070   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 56/60
	I1207 21:11:16.968382   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 57/60
	I1207 21:11:17.969572   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 58/60
	I1207 21:11:18.970800   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 59/60
	I1207 21:11:19.972205   50059 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1207 21:11:19.972258   50059 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:11:19.972279   50059 retry.go:31] will retry after 532.999544ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:11:20.505974   50059 stop.go:39] StopHost: no-preload-950431
	I1207 21:11:20.506317   50059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:11:20.506370   50059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:11:20.520793   50059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33949
	I1207 21:11:20.521184   50059 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:11:20.521637   50059 main.go:141] libmachine: Using API Version  1
	I1207 21:11:20.521658   50059 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:11:20.521956   50059 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:11:20.523878   50059 out.go:177] * Stopping node "no-preload-950431"  ...
	I1207 21:11:20.525399   50059 main.go:141] libmachine: Stopping "no-preload-950431"...
	I1207 21:11:20.525414   50059 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:11:20.526911   50059 main.go:141] libmachine: (no-preload-950431) Calling .Stop
	I1207 21:11:20.529667   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 0/60
	I1207 21:11:21.531065   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 1/60
	I1207 21:11:22.532152   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 2/60
	I1207 21:11:23.533423   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 3/60
	I1207 21:11:24.534764   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 4/60
	I1207 21:11:25.536384   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 5/60
	I1207 21:11:26.537739   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 6/60
	I1207 21:11:27.539017   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 7/60
	I1207 21:11:28.540285   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 8/60
	I1207 21:11:29.541597   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 9/60
	I1207 21:11:30.543299   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 10/60
	I1207 21:11:31.544596   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 11/60
	I1207 21:11:32.546062   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 12/60
	I1207 21:11:33.547344   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 13/60
	I1207 21:11:34.548700   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 14/60
	I1207 21:11:35.550051   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 15/60
	I1207 21:11:36.551523   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 16/60
	I1207 21:11:37.552963   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 17/60
	I1207 21:11:38.554241   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 18/60
	I1207 21:11:39.555686   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 19/60
	I1207 21:11:40.557233   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 20/60
	I1207 21:11:41.558598   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 21/60
	I1207 21:11:42.559861   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 22/60
	I1207 21:11:43.561285   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 23/60
	I1207 21:11:44.562611   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 24/60
	I1207 21:11:45.564218   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 25/60
	I1207 21:11:46.565582   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 26/60
	I1207 21:11:47.567003   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 27/60
	I1207 21:11:48.568396   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 28/60
	I1207 21:11:49.569853   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 29/60
	I1207 21:11:50.571373   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 30/60
	I1207 21:11:51.572740   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 31/60
	I1207 21:11:52.574125   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 32/60
	I1207 21:11:53.576217   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 33/60
	I1207 21:11:54.578661   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 34/60
	I1207 21:11:55.580020   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 35/60
	I1207 21:11:56.581375   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 36/60
	I1207 21:11:57.582985   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 37/60
	I1207 21:11:58.584399   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 38/60
	I1207 21:11:59.585899   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 39/60
	I1207 21:12:00.588021   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 40/60
	I1207 21:12:01.589375   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 41/60
	I1207 21:12:02.590764   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 42/60
	I1207 21:12:03.592184   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 43/60
	I1207 21:12:04.593826   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 44/60
	I1207 21:12:05.595380   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 45/60
	I1207 21:12:06.596965   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 46/60
	I1207 21:12:07.598796   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 47/60
	I1207 21:12:08.600113   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 48/60
	I1207 21:12:09.601599   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 49/60
	I1207 21:12:10.603278   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 50/60
	I1207 21:12:11.604782   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 51/60
	I1207 21:12:12.606353   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 52/60
	I1207 21:12:13.608471   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 53/60
	I1207 21:12:14.609699   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 54/60
	I1207 21:12:15.610943   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 55/60
	I1207 21:12:16.612317   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 56/60
	I1207 21:12:17.613475   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 57/60
	I1207 21:12:18.614880   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 58/60
	I1207 21:12:19.616169   50059 main.go:141] libmachine: (no-preload-950431) Waiting for machine to stop 59/60
	I1207 21:12:20.616689   50059 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1207 21:12:20.616739   50059 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:12:20.618638   50059 out.go:177] 
	W1207 21:12:20.620088   50059 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1207 21:12:20.620109   50059 out.go:239] * 
	* 
	W1207 21:12:20.622490   50059 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 21:12:20.624134   50059 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-950431 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950431 -n no-preload-950431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950431 -n no-preload-950431: exit status 3 (18.688917153s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:12:39.314321   50779 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.100:22: connect: no route to host
	E1207 21:12:39.314365   50779 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.100:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-950431" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-275828 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-275828 --alsologtostderr -v=3: exit status 82 (2m1.388083454s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-275828"  ...
	* Stopping node "default-k8s-diff-port-275828"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 21:10:22.560924   50127 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:10:22.561041   50127 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:10:22.561050   50127 out.go:309] Setting ErrFile to fd 2...
	I1207 21:10:22.561055   50127 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:10:22.561219   50127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:10:22.561440   50127 out.go:303] Setting JSON to false
	I1207 21:10:22.561515   50127 mustload.go:65] Loading cluster: default-k8s-diff-port-275828
	I1207 21:10:22.561846   50127 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:10:22.561941   50127 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:10:22.562115   50127 mustload.go:65] Loading cluster: default-k8s-diff-port-275828
	I1207 21:10:22.562222   50127 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:10:22.562246   50127 stop.go:39] StopHost: default-k8s-diff-port-275828
	I1207 21:10:22.562645   50127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:10:22.562687   50127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:10:22.576507   50127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46195
	I1207 21:10:22.576955   50127 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:10:22.577528   50127 main.go:141] libmachine: Using API Version  1
	I1207 21:10:22.577558   50127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:10:22.577940   50127 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:10:22.580488   50127 out.go:177] * Stopping node "default-k8s-diff-port-275828"  ...
	I1207 21:10:22.582289   50127 main.go:141] libmachine: Stopping "default-k8s-diff-port-275828"...
	I1207 21:10:22.582304   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:10:22.583830   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Stop
	I1207 21:10:22.587041   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 0/60
	I1207 21:10:23.588409   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 1/60
	I1207 21:10:24.589701   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 2/60
	I1207 21:10:25.591128   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 3/60
	I1207 21:10:26.592341   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 4/60
	I1207 21:10:27.594824   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 5/60
	I1207 21:10:28.596300   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 6/60
	I1207 21:10:29.597681   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 7/60
	I1207 21:10:30.599035   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 8/60
	I1207 21:10:31.600290   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 9/60
	I1207 21:10:32.602486   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 10/60
	I1207 21:10:33.603975   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 11/60
	I1207 21:10:34.605609   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 12/60
	I1207 21:10:35.607195   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 13/60
	I1207 21:10:36.608365   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 14/60
	I1207 21:10:37.610220   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 15/60
	I1207 21:10:38.611791   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 16/60
	I1207 21:10:39.613259   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 17/60
	I1207 21:10:40.614675   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 18/60
	I1207 21:10:41.615935   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 19/60
	I1207 21:10:42.618516   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 20/60
	I1207 21:10:43.620252   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 21/60
	I1207 21:10:44.621594   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 22/60
	I1207 21:10:45.623065   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 23/60
	I1207 21:10:46.624346   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 24/60
	I1207 21:10:47.626672   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 25/60
	I1207 21:10:48.628449   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 26/60
	I1207 21:10:49.629673   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 27/60
	I1207 21:10:50.631254   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 28/60
	I1207 21:10:51.632516   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 29/60
	I1207 21:10:52.633840   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 30/60
	I1207 21:10:53.635278   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 31/60
	I1207 21:10:54.636767   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 32/60
	I1207 21:10:55.638376   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 33/60
	I1207 21:10:56.639887   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 34/60
	I1207 21:10:57.642146   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 35/60
	I1207 21:10:58.643424   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 36/60
	I1207 21:10:59.645017   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 37/60
	I1207 21:11:00.646214   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 38/60
	I1207 21:11:01.647571   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 39/60
	I1207 21:11:02.649780   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 40/60
	I1207 21:11:03.651067   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 41/60
	I1207 21:11:04.652549   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 42/60
	I1207 21:11:05.654012   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 43/60
	I1207 21:11:06.655561   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 44/60
	I1207 21:11:07.657694   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 45/60
	I1207 21:11:08.659014   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 46/60
	I1207 21:11:09.660658   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 47/60
	I1207 21:11:10.662000   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 48/60
	I1207 21:11:11.663265   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 49/60
	I1207 21:11:12.665088   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 50/60
	I1207 21:11:13.666631   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 51/60
	I1207 21:11:14.668381   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 52/60
	I1207 21:11:15.669735   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 53/60
	I1207 21:11:16.670927   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 54/60
	I1207 21:11:17.672787   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 55/60
	I1207 21:11:18.674181   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 56/60
	I1207 21:11:19.675399   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 57/60
	I1207 21:11:20.676722   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 58/60
	I1207 21:11:21.677991   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 59/60
	I1207 21:11:22.679337   50127 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1207 21:11:22.679382   50127 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:11:22.679399   50127 retry.go:31] will retry after 1.089762171s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:11:23.769612   50127 stop.go:39] StopHost: default-k8s-diff-port-275828
	I1207 21:11:23.770025   50127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:11:23.770067   50127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:11:23.784070   50127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I1207 21:11:23.784500   50127 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:11:23.784946   50127 main.go:141] libmachine: Using API Version  1
	I1207 21:11:23.784979   50127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:11:23.785303   50127 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:11:23.787340   50127 out.go:177] * Stopping node "default-k8s-diff-port-275828"  ...
	I1207 21:11:23.788669   50127 main.go:141] libmachine: Stopping "default-k8s-diff-port-275828"...
	I1207 21:11:23.788694   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:11:23.790333   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Stop
	I1207 21:11:23.793501   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 0/60
	I1207 21:11:24.794665   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 1/60
	I1207 21:11:25.796162   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 2/60
	I1207 21:11:26.797352   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 3/60
	I1207 21:11:27.798879   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 4/60
	I1207 21:11:28.800731   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 5/60
	I1207 21:11:29.802016   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 6/60
	I1207 21:11:30.803332   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 7/60
	I1207 21:11:31.804750   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 8/60
	I1207 21:11:32.806138   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 9/60
	I1207 21:11:33.808100   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 10/60
	I1207 21:11:34.809546   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 11/60
	I1207 21:11:35.811024   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 12/60
	I1207 21:11:36.812630   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 13/60
	I1207 21:11:37.814227   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 14/60
	I1207 21:11:38.816263   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 15/60
	I1207 21:11:39.817654   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 16/60
	I1207 21:11:40.819098   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 17/60
	I1207 21:11:41.820476   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 18/60
	I1207 21:11:42.822008   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 19/60
	I1207 21:11:43.824054   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 20/60
	I1207 21:11:44.825415   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 21/60
	I1207 21:11:45.826847   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 22/60
	I1207 21:11:46.828158   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 23/60
	I1207 21:11:47.829664   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 24/60
	I1207 21:11:48.831812   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 25/60
	I1207 21:11:49.833204   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 26/60
	I1207 21:11:50.834615   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 27/60
	I1207 21:11:51.835899   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 28/60
	I1207 21:11:52.837337   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 29/60
	I1207 21:11:53.839209   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 30/60
	I1207 21:11:54.840651   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 31/60
	I1207 21:11:55.841972   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 32/60
	I1207 21:11:56.843952   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 33/60
	I1207 21:11:57.845512   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 34/60
	I1207 21:11:58.847367   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 35/60
	I1207 21:11:59.848926   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 36/60
	I1207 21:12:00.850320   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 37/60
	I1207 21:12:01.851742   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 38/60
	I1207 21:12:02.853058   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 39/60
	I1207 21:12:03.854800   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 40/60
	I1207 21:12:04.856337   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 41/60
	I1207 21:12:05.857638   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 42/60
	I1207 21:12:06.859160   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 43/60
	I1207 21:12:07.860578   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 44/60
	I1207 21:12:08.862341   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 45/60
	I1207 21:12:09.864307   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 46/60
	I1207 21:12:10.865602   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 47/60
	I1207 21:12:11.866969   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 48/60
	I1207 21:12:12.868708   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 49/60
	I1207 21:12:13.870541   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 50/60
	I1207 21:12:14.872037   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 51/60
	I1207 21:12:15.873375   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 52/60
	I1207 21:12:16.874747   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 53/60
	I1207 21:12:17.876046   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 54/60
	I1207 21:12:18.877778   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 55/60
	I1207 21:12:19.878988   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 56/60
	I1207 21:12:20.880526   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 57/60
	I1207 21:12:21.881764   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 58/60
	I1207 21:12:22.883137   50127 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for machine to stop 59/60
	I1207 21:12:23.884031   50127 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1207 21:12:23.884070   50127 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:12:23.886212   50127 out.go:177] 
	W1207 21:12:23.887706   50127 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1207 21:12:23.887722   50127 out.go:239] * 
	* 
	W1207 21:12:23.890031   50127 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 21:12:23.891553   50127 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-275828 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828: exit status 3 (18.492716798s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:12:42.386298   50809 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.254:22: connect: no route to host
	E1207 21:12:42.386317   50809 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.254:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-275828" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-483745 -n old-k8s-version-483745
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-483745 -n old-k8s-version-483745: exit status 3 (3.199925261s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:10:34.258318   50167 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.171:22: connect: no route to host
	E1207 21:10:34.258340   50167 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.171:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-483745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-483745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152603578s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.171:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-483745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-483745 -n old-k8s-version-483745
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-483745 -n old-k8s-version-483745: exit status 3 (3.062701867s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:10:43.474383   50239 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.171:22: connect: no route to host
	E1207 21:10:43.474410   50239 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.171:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-483745" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598346 -n embed-certs-598346
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598346 -n embed-certs-598346: exit status 3 (3.163867127s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:11:26.962296   50507 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host
	E1207 21:11:26.962318   50507 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-598346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-598346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153230037s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-598346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598346 -n embed-certs-598346
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598346 -n embed-certs-598346: exit status 3 (3.062583924s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:11:36.178343   50584 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host
	E1207 21:11:36.178363   50584 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-598346" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950431 -n no-preload-950431
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950431 -n no-preload-950431: exit status 3 (3.171424229s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:12:42.486169   50872 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.100:22: connect: no route to host
	E1207 21:12:42.486198   50872 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.100:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-950431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-950431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149935762s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.100:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-950431 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950431 -n no-preload-950431
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950431 -n no-preload-950431: exit status 3 (3.063007834s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:12:51.698346   50996 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.100:22: connect: no route to host
	E1207 21:12:51.698392   50996 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.100:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-950431" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828: exit status 3 (3.16839966s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:12:45.554297   50913 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.254:22: connect: no route to host
	E1207 21:12:45.554319   50913 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.254:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-275828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-275828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155860002s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.254:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-275828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828: exit status 3 (3.059271026s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:12:54.770279   51043 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.254:22: connect: no route to host
	E1207 21:12:54.770301   51043 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.254:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-275828" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1207 21:21:05.939117   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-598346 -n embed-certs-598346
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-07 21:30:01.940644115 +0000 UTC m=+5326.108789081
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598346 -n embed-certs-598346
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-598346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-598346 logs -n 25: (1.606365473s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-620116 -- sudo                         | cert-options-620116          | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-620116                                 | cert-options-620116          | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:10 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:08 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-099448                              | stopped-upgrade-099448       | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:07 UTC |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-483745        | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-121798 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | disable-driver-mounts-121798                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:10 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-598346            | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC | 07 Dec 23 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-950431             | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-275828  | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-483745             | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-598346                 | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-950431                  | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-275828       | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 21:12:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
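	The start logs below use the klog-style line format described above. For anyone who wants to filter or post-process them, a minimal stand-alone parser for that format could look like the following sketch (illustrative only, not part of minikube; the field split follows the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" layout stated above):

    // klogparse.go - an illustrative sketch (not minikube code) that splits
    // lines in the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" format.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text()) // report lines are tab-indented
            if m := klogLine.FindStringSubmatch(line); m != nil {
                // m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=thread id, m[5]=file, m[6]=line, m[7]=message
                fmt.Printf("%s %s %s:%s  %s\n", m[1], m[3], m[5], m[6], m[7])
            }
        }
    }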
	I1207 21:12:54.827966   51113 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:12:54.828121   51113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:12:54.828131   51113 out.go:309] Setting ErrFile to fd 2...
	I1207 21:12:54.828138   51113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:12:54.828309   51113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:12:54.828894   51113 out.go:303] Setting JSON to false
	I1207 21:12:54.829778   51113 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6921,"bootTime":1701976654,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:12:54.829872   51113 start.go:138] virtualization: kvm guest
	I1207 21:12:54.832359   51113 out.go:177] * [default-k8s-diff-port-275828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:12:54.833958   51113 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:12:54.833997   51113 notify.go:220] Checking for updates...
	I1207 21:12:54.835484   51113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:12:54.837345   51113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:12:54.838716   51113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:12:54.840105   51113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:12:54.841497   51113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:12:54.843170   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:12:54.843587   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:12:54.843638   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:12:54.857987   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I1207 21:12:54.858345   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:12:54.858826   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:12:54.858846   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:12:54.859141   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:12:54.859317   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:12:54.859528   51113 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:12:54.859797   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:12:54.859827   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:12:54.873523   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1207 21:12:54.873866   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:12:54.874374   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:12:54.874399   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:12:54.874726   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:12:54.874907   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:12:54.906909   51113 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 21:12:54.908496   51113 start.go:298] selected driver: kvm2
	I1207 21:12:54.908515   51113 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:12:54.908626   51113 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:12:54.909287   51113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:54.909431   51113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:12:54.924711   51113 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:12:54.925077   51113 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 21:12:54.925136   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:12:54.925149   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:12:54.925174   51113 start_flags.go:323] config:
	{Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-27582
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:12:54.925311   51113 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:54.927216   51113 out.go:177] * Starting control plane node default-k8s-diff-port-275828 in cluster default-k8s-diff-port-275828
	I1207 21:12:51.859250   51037 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:12:51.859366   51037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:12:51.859440   51037 cache.go:107] acquiring lock: {Name:mke7b9cce1dd6177935767b47cf17b792acd813b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859507   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 21:12:51.859492   51037 cache.go:107] acquiring lock: {Name:mk57eae37995939df6ffd0df03832314e9e6100e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859493   51037 cache.go:107] acquiring lock: {Name:mk5a91936dc04372c96de7514149d2b4b0d17dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859522   51037 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.402µs
	I1207 21:12:51.859538   51037 cache.go:107] acquiring lock: {Name:mk4c716c1104ca016c5e335d1cbf204f19d0197f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859560   51037 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 21:12:51.859581   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 exists
	I1207 21:12:51.859591   51037 start.go:365] acquiring machines lock for no-preload-950431: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:12:51.859593   51037 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1" took 111.482µs
	I1207 21:12:51.859611   51037 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859596   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 exists
	I1207 21:12:51.859564   51037 cache.go:107] acquiring lock: {Name:mke02250ffd1d3b6fb4470dd05093397053b289d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859627   51037 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1" took 139.857µs
	I1207 21:12:51.859637   51037 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859588   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I1207 21:12:51.859647   51037 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 112.196µs
	I1207 21:12:51.859621   51037 cache.go:107] acquiring lock: {Name:mk2a1c8afaf74efaf0daac8bf102ee63aa4b5154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859664   51037 cache.go:107] acquiring lock: {Name:mk042626599761dccdc47fcf8ee95d59d24917b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859660   51037 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I1207 21:12:51.859443   51037 cache.go:107] acquiring lock: {Name:mk69e12850117516cff168d811605a739d29808c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859701   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I1207 21:12:51.859715   51037 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 185.872µs
	I1207 21:12:51.859736   51037 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I1207 21:12:51.859728   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 exists
	I1207 21:12:51.859750   51037 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1" took 313.668µs
	I1207 21:12:51.859758   51037 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859796   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 exists
	I1207 21:12:51.859809   51037 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1" took 179.42µs
	I1207 21:12:51.859823   51037 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859808   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I1207 21:12:51.859910   51037 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 310.345µs
	I1207 21:12:51.859931   51037 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I1207 21:12:51.859947   51037 cache.go:87] Successfully saved all images to host disk.
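	The cache lines above follow a simple check-then-skip pattern: for each required image, minikube looks for an existing tarball under .minikube/cache/images and records it as saved if it is already there. A rough, self-contained illustration of that check (path layout inferred from the log; this is not minikube's actual cache.go):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
        images := []string{
            "registry.k8s.io/pause:3.9",
            "registry.k8s.io/etcd:3.5.10-0",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        for _, img := range images {
            // "registry.k8s.io/pause:3.9" -> "registry.k8s.io/pause_3.9", as in the log paths
            rel := strings.ReplaceAll(img, ":", "_")
            tar := filepath.Join(cacheDir, rel)
            if _, err := os.Stat(tar); err == nil {
                fmt.Println("cached, skipping save:", tar)
            } else {
                fmt.Println("not cached, would save:", tar)
            }
        }
    }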
	I1207 21:12:57.714205   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:12:54.928473   51113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:12:54.928503   51113 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 21:12:54.928516   51113 cache.go:56] Caching tarball of preloaded images
	I1207 21:12:54.928608   51113 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:12:54.928621   51113 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 21:12:54.928718   51113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:12:54.928893   51113 start.go:365] acquiring machines lock for default-k8s-diff-port-275828: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:13:00.786234   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:06.866234   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:09.938211   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:16.018206   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:19.090196   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:25.170164   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:28.242299   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:34.322194   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:37.394241   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:43.474183   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:46.546186   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:52.626214   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:55.698176   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:01.778218   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:04.850228   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:10.930239   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:14.002222   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:20.082270   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:23.154237   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:29.234226   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:32.306242   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:38.386218   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:41.458157   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:47.538219   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:50.610223   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:56.690260   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:59.766215   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:05.842220   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:08.914154   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:14.994193   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:18.066232   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:21.070365   50624 start.go:369] acquired machines lock for "embed-certs-598346" in 3m44.734224905s
	I1207 21:15:21.070421   50624 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:15:21.070427   50624 fix.go:54] fixHost starting: 
	I1207 21:15:21.070755   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:15:21.070787   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:15:21.085298   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I1207 21:15:21.085643   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:15:21.086150   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:15:21.086172   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:15:21.086491   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:15:21.086681   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:21.086828   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:15:21.088256   50624 fix.go:102] recreateIfNeeded on embed-certs-598346: state=Stopped err=<nil>
	I1207 21:15:21.088283   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	W1207 21:15:21.088465   50624 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:15:21.090020   50624 out.go:177] * Restarting existing kvm2 VM for "embed-certs-598346" ...
	I1207 21:15:21.091364   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Start
	I1207 21:15:21.091521   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring networks are active...
	I1207 21:15:21.092215   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring network default is active
	I1207 21:15:21.092551   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring network mk-embed-certs-598346 is active
	I1207 21:15:21.092938   50624 main.go:141] libmachine: (embed-certs-598346) Getting domain xml...
	I1207 21:15:21.093647   50624 main.go:141] libmachine: (embed-certs-598346) Creating domain...
	I1207 21:15:21.067977   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:15:21.068024   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:15:21.070214   50270 machine.go:91] provisioned docker machine in 4m37.409386757s
	I1207 21:15:21.070272   50270 fix.go:56] fixHost completed within 4m37.430493841s
	I1207 21:15:21.070280   50270 start.go:83] releasing machines lock for "old-k8s-version-483745", held for 4m37.43051315s
	W1207 21:15:21.070299   50270 start.go:694] error starting host: provision: host is not running
	W1207 21:15:21.070399   50270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1207 21:15:21.070408   50270 start.go:709] Will try again in 5 seconds ...
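	The long run of "no route to host" dials above is libmachine probing the guest's SSH port (192.168.61.171:22) until the VM answers or the start attempt gives up, after which minikube retries the whole start. A minimal sketch of that dial-and-retry idea (illustrative only, not libmachine's actual implementation):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
            if err == nil {
                conn.Close()
                return nil // SSH port is reachable
            }
            fmt.Println("Error dialing TCP:", err)
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForSSH("192.168.61.171:22", 1*time.Minute); err != nil {
            fmt.Println(err)
        }
    }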
	I1207 21:15:22.319220   50624 main.go:141] libmachine: (embed-certs-598346) Waiting to get IP...
	I1207 21:15:22.320059   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.320432   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.320505   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.320416   51516 retry.go:31] will retry after 306.732639ms: waiting for machine to come up
	I1207 21:15:22.629026   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.629495   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.629523   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.629465   51516 retry.go:31] will retry after 244.665765ms: waiting for machine to come up
	I1207 21:15:22.875896   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.876248   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.876275   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.876210   51516 retry.go:31] will retry after 389.522298ms: waiting for machine to come up
	I1207 21:15:23.267728   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:23.268119   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:23.268140   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:23.268064   51516 retry.go:31] will retry after 521.34699ms: waiting for machine to come up
	I1207 21:15:23.790614   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:23.791043   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:23.791067   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:23.791002   51516 retry.go:31] will retry after 493.71234ms: waiting for machine to come up
	I1207 21:15:24.286698   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:24.287121   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:24.287145   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:24.287061   51516 retry.go:31] will retry after 736.984501ms: waiting for machine to come up
	I1207 21:15:25.025941   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:25.026294   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:25.026317   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:25.026256   51516 retry.go:31] will retry after 1.06643424s: waiting for machine to come up
	I1207 21:15:26.093760   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:26.094266   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:26.094306   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:26.094211   51516 retry.go:31] will retry after 1.226791228s: waiting for machine to come up
	I1207 21:15:26.072827   50270 start.go:365] acquiring machines lock for old-k8s-version-483745: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:15:27.322536   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:27.322912   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:27.322940   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:27.322857   51516 retry.go:31] will retry after 1.246504696s: waiting for machine to come up
	I1207 21:15:28.571241   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:28.571651   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:28.571677   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:28.571606   51516 retry.go:31] will retry after 2.084958391s: waiting for machine to come up
	I1207 21:15:30.658654   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:30.659047   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:30.659080   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:30.658990   51516 retry.go:31] will retry after 2.104944011s: waiting for machine to come up
	I1207 21:15:32.765669   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:32.766136   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:32.766167   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:32.766076   51516 retry.go:31] will retry after 3.05038185s: waiting for machine to come up
	I1207 21:15:35.819082   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:35.819446   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:35.819477   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:35.819399   51516 retry.go:31] will retry after 3.445969037s: waiting for machine to come up
	I1207 21:15:40.686593   51037 start.go:369] acquired machines lock for "no-preload-950431" in 2m48.82697748s
	I1207 21:15:40.686639   51037 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:15:40.686646   51037 fix.go:54] fixHost starting: 
	I1207 21:15:40.687011   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:15:40.687043   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:15:40.703294   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I1207 21:15:40.703682   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:15:40.704245   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:15:40.704276   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:15:40.704620   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:15:40.704792   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:15:40.704938   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:15:40.706394   51037 fix.go:102] recreateIfNeeded on no-preload-950431: state=Stopped err=<nil>
	I1207 21:15:40.706420   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	W1207 21:15:40.706593   51037 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:15:40.709148   51037 out.go:177] * Restarting existing kvm2 VM for "no-preload-950431" ...
	I1207 21:15:39.269367   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.269776   50624 main.go:141] libmachine: (embed-certs-598346) Found IP for machine: 192.168.72.180
	I1207 21:15:39.269802   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has current primary IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.269808   50624 main.go:141] libmachine: (embed-certs-598346) Reserving static IP address...
	I1207 21:15:39.270234   50624 main.go:141] libmachine: (embed-certs-598346) Reserved static IP address: 192.168.72.180
	I1207 21:15:39.270265   50624 main.go:141] libmachine: (embed-certs-598346) Waiting for SSH to be available...
	I1207 21:15:39.270279   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "embed-certs-598346", mac: "52:54:00:15:56:8f", ip: "192.168.72.180"} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.270308   50624 main.go:141] libmachine: (embed-certs-598346) DBG | skip adding static IP to network mk-embed-certs-598346 - found existing host DHCP lease matching {name: "embed-certs-598346", mac: "52:54:00:15:56:8f", ip: "192.168.72.180"}
	I1207 21:15:39.270325   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Getting to WaitForSSH function...
	I1207 21:15:39.272292   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.272639   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.272674   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.272773   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Using SSH client type: external
	I1207 21:15:39.272827   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa (-rw-------)
	I1207 21:15:39.272869   50624 main.go:141] libmachine: (embed-certs-598346) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:15:39.272887   50624 main.go:141] libmachine: (embed-certs-598346) DBG | About to run SSH command:
	I1207 21:15:39.272903   50624 main.go:141] libmachine: (embed-certs-598346) DBG | exit 0
	I1207 21:15:39.363326   50624 main.go:141] libmachine: (embed-certs-598346) DBG | SSH cmd err, output: <nil>: 
	I1207 21:15:39.363757   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetConfigRaw
	I1207 21:15:39.364301   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:39.366828   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.367157   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.367206   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.367459   50624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/config.json ...
	I1207 21:15:39.367693   50624 machine.go:88] provisioning docker machine ...
	I1207 21:15:39.367713   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:39.367918   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.368085   50624 buildroot.go:166] provisioning hostname "embed-certs-598346"
	I1207 21:15:39.368104   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.368241   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.370443   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.370771   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.370798   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.371044   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.371192   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.371358   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.371507   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.371660   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:39.372058   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:39.372078   50624 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-598346 && echo "embed-certs-598346" | sudo tee /etc/hostname
	I1207 21:15:39.498370   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-598346
	
	I1207 21:15:39.498394   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.501284   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.501691   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.501711   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.501952   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.502135   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.502267   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.502432   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.502604   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:39.503052   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:39.503091   50624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-598346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-598346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-598346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:15:39.625683   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:15:39.625713   50624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:15:39.625735   50624 buildroot.go:174] setting up certificates
	I1207 21:15:39.625748   50624 provision.go:83] configureAuth start
	I1207 21:15:39.625760   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.626074   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:39.628753   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.629102   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.629125   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.629277   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.631206   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.631478   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.631507   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.631632   50624 provision.go:138] copyHostCerts
	I1207 21:15:39.631682   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:15:39.631698   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:15:39.631763   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:15:39.631844   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:15:39.631852   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:15:39.631874   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:15:39.631922   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:15:39.631928   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:15:39.631951   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:15:39.631993   50624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.embed-certs-598346 san=[192.168.72.180 192.168.72.180 localhost 127.0.0.1 minikube embed-certs-598346]
	I1207 21:15:39.968036   50624 provision.go:172] copyRemoteCerts
	I1207 21:15:39.968098   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:15:39.968121   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.970937   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.971356   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.971386   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.971627   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.971847   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.972010   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.972148   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.060156   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:15:40.082673   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1207 21:15:40.104263   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:15:40.125974   50624 provision.go:86] duration metric: configureAuth took 500.211549ms
	I1207 21:15:40.126012   50624 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:15:40.126233   50624 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:15:40.126317   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.129108   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.129484   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.129505   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.129662   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.129884   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.130039   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.130197   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.130358   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:40.130677   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:40.130698   50624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:15:40.439407   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:15:40.439438   50624 machine.go:91] provisioned docker machine in 1.071729841s
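	The "%!s(MISSING)" token in the provisioning command logged above is Go's fmt package marking a %s verb that had no matching argument, most likely because the shell printf's literal %s was passed through a Printf-style formatter when the command was rendered for the log; as the SSH output shows, the CRIO_MINIKUBE_OPTIONS file was still written as intended. A one-line demonstration of where the token comes from:

    package main

    import "fmt"

    func main() {
        // A %s verb with no matching argument is rendered as "%!s(MISSING)",
        // which is the token that appears in the provisioning command above.
        fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s"))
        // Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)
    }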
	I1207 21:15:40.439451   50624 start.go:300] post-start starting for "embed-certs-598346" (driver="kvm2")
	I1207 21:15:40.439465   50624 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:15:40.439504   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.439827   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:15:40.439860   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.442750   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.443135   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.443160   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.443400   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.443623   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.443811   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.443974   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.531350   50624 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:15:40.535614   50624 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:15:40.535644   50624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:15:40.535720   50624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:15:40.535813   50624 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:15:40.535938   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:15:40.543981   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:15:40.566714   50624 start.go:303] post-start completed in 127.248268ms
	I1207 21:15:40.566739   50624 fix.go:56] fixHost completed within 19.496310567s
	I1207 21:15:40.566763   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.569439   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.569774   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.569791   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.569915   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.570085   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.570257   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.570386   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.570534   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:40.570842   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:40.570855   50624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:15:40.686455   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983740.637211698
	
	I1207 21:15:40.686479   50624 fix.go:206] guest clock: 1701983740.637211698
	I1207 21:15:40.686486   50624 fix.go:219] Guest: 2023-12-07 21:15:40.637211698 +0000 UTC Remote: 2023-12-07 21:15:40.566742665 +0000 UTC m=+244.381466877 (delta=70.469033ms)
	I1207 21:15:40.686503   50624 fix.go:190] guest clock delta is within tolerance: 70.469033ms
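The fix.go lines above compare the guest's date +%s.%N output against the host clock and accept the skew because it is small. A rough Go sketch of that comparison; the one-second tolerance is an assumption, not a value taken from minikube:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock delta stays under a
// tolerance. Illustrative sketch of the check logged above.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1701983740, 637211698)     // guest clock value from the log
	host := guest.Add(70469033 * time.Nanosecond) // shifted by the logged 70.469033ms delta
	d, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}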
	I1207 21:15:40.686508   50624 start.go:83] releasing machines lock for "embed-certs-598346", held for 19.61610992s
	I1207 21:15:40.686533   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.686809   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:40.689665   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.690046   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.690069   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.690242   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690685   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690903   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690988   50624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:15:40.691035   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.691162   50624 ssh_runner.go:195] Run: cat /version.json
	I1207 21:15:40.691196   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.693712   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.693943   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694078   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.694106   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694269   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.694295   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.694333   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694419   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.694501   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.694580   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.694685   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.694742   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.694816   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.694925   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.801618   50624 ssh_runner.go:195] Run: systemctl --version
	I1207 21:15:40.807496   50624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:15:40.967288   50624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:15:40.974223   50624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:15:40.974315   50624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:15:40.988391   50624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:15:40.988418   50624 start.go:475] detecting cgroup driver to use...
	I1207 21:15:40.988510   50624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:15:41.002379   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:15:41.016074   50624 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:15:41.016125   50624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:15:41.031096   50624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:15:41.044808   50624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:15:41.150630   50624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:15:40.710656   51037 main.go:141] libmachine: (no-preload-950431) Calling .Start
	I1207 21:15:40.710832   51037 main.go:141] libmachine: (no-preload-950431) Ensuring networks are active...
	I1207 21:15:40.711509   51037 main.go:141] libmachine: (no-preload-950431) Ensuring network default is active
	I1207 21:15:40.711813   51037 main.go:141] libmachine: (no-preload-950431) Ensuring network mk-no-preload-950431 is active
	I1207 21:15:40.712201   51037 main.go:141] libmachine: (no-preload-950431) Getting domain xml...
	I1207 21:15:40.712860   51037 main.go:141] libmachine: (no-preload-950431) Creating domain...
	I1207 21:15:41.269009   50624 docker.go:219] disabling docker service ...
	I1207 21:15:41.269067   50624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:15:41.281800   50624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:15:41.293694   50624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:15:41.413774   50624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:15:41.523960   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:15:41.536474   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:15:41.553611   50624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:15:41.553668   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.562741   50624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:15:41.562831   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.571841   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.580887   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.590259   50624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:15:41.599349   50624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:15:41.607259   50624 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:15:41.607314   50624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:15:41.619425   50624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:15:41.627826   50624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:15:41.736815   50624 ssh_runner.go:195] Run: sudo systemctl restart crio
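The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and set conmon_cgroup before CRI-O is restarted. As a hedged sketch, the guest configuration they aim for might look roughly like the following (the TOML section names are assumptions; the log only shows the individual key replacements), alongside the crictl endpoint file written earlier:

package main

import "fmt"

// Sketch of the state the sed edits above aim for in
// /etc/crio/crio.conf.d/02-crio.conf plus the crictl endpoint file.
// Section names are assumptions; the values come from the logged commands.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
`

const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

func main() {
	fmt.Print("# /etc/crio/crio.conf.d/02-crio.conf\n" + crioDropIn)
	fmt.Print("# /etc/crictl.yaml\n" + crictlYAML)
}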
	I1207 21:15:41.896418   50624 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:15:41.896505   50624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:15:41.901539   50624 start.go:543] Will wait 60s for crictl version
	I1207 21:15:41.901598   50624 ssh_runner.go:195] Run: which crictl
	I1207 21:15:41.905454   50624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:15:41.942196   50624 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:15:41.942267   50624 ssh_runner.go:195] Run: crio --version
	I1207 21:15:41.986024   50624 ssh_runner.go:195] Run: crio --version
	I1207 21:15:42.034806   50624 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:15:42.036352   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:42.039304   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:42.039704   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:42.039745   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:42.039930   50624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1207 21:15:42.043951   50624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
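The /etc/hosts rewrite above removes any stale host.minikube.internal entry and appends the gateway IP, so the update is idempotent. A small Go sketch of the same rewrite, for illustration only:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry mirrors the idempotent /etc/hosts rewrite in the log:
// drop any existing line ending in "<tab>host.minikube.internal", then append
// a fresh "ip<tab>name" entry. Illustrative sketch, not minikube's code.
func ensureHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(ensureHostEntry(string(data), "192.168.72.1", "host.minikube.internal"))
}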
	I1207 21:15:42.056473   50624 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:15:42.056535   50624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:15:42.099359   50624 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:15:42.099459   50624 ssh_runner.go:195] Run: which lz4
	I1207 21:15:42.103324   50624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 21:15:42.107440   50624 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:15:42.107476   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:15:44.063941   50624 crio.go:444] Took 1.960653 seconds to copy over tarball
	I1207 21:15:44.064018   50624 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:15:41.955586   51037 main.go:141] libmachine: (no-preload-950431) Waiting to get IP...
	I1207 21:15:41.956530   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:41.956967   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:41.957004   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:41.956919   51634 retry.go:31] will retry after 266.143384ms: waiting for machine to come up
	I1207 21:15:42.224547   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.225112   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.225142   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.225060   51634 retry.go:31] will retry after 314.364486ms: waiting for machine to come up
	I1207 21:15:42.540722   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.541264   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.541294   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.541225   51634 retry.go:31] will retry after 447.845741ms: waiting for machine to come up
	I1207 21:15:42.990858   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.991283   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.991310   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.991246   51634 retry.go:31] will retry after 494.509595ms: waiting for machine to come up
	I1207 21:15:43.487745   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:43.488268   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:43.488305   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:43.488218   51634 retry.go:31] will retry after 517.471464ms: waiting for machine to come up
	I1207 21:15:44.007846   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:44.008291   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:44.008322   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:44.008247   51634 retry.go:31] will retry after 755.53339ms: waiting for machine to come up
	I1207 21:15:44.765367   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:44.765799   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:44.765827   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:44.765743   51634 retry.go:31] will retry after 947.674862ms: waiting for machine to come up
	I1207 21:15:45.715436   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:45.715859   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:45.715890   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:45.715811   51634 retry.go:31] will retry after 1.304063218s: waiting for machine to come up
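The interleaved no-preload-950431 lines show retry.go waiting for the VM to obtain an IP, with delays that grow and carry jitter (266ms, 314ms, 447ms, and so on up to several seconds). A Go sketch of that kind of backoff; the base delay and growth factor are assumptions chosen to resemble the logged values, not minikube constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoffDelays produces growing, jittered waits similar to the
// "will retry after ..." delays in the log.
func backoffDelays(attempts int) []time.Duration {
	delays := make([]time.Duration, 0, attempts)
	d := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		jitter := time.Duration(rand.Int63n(int64(d) / 4)) // up to ~25% jitter
		delays = append(delays, d+jitter)
		d = d * 3 / 2 // grow roughly 1.5x per attempt
	}
	return delays
}

func main() {
	for i, d := range backoffDelays(8) {
		fmt.Printf("attempt %d: wait %v\n", i+1, d)
	}
}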
	I1207 21:15:47.049597   50624 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.985550761s)
	I1207 21:15:47.049622   50624 crio.go:451] Took 2.985655 seconds to extract the tarball
	I1207 21:15:47.049632   50624 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:15:47.089358   50624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:15:47.145982   50624 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:15:47.146007   50624 cache_images.go:84] Images are preloaded, skipping loading
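Taken together, the preload lines show the decision path: the first crictl check finds no preloaded images, the cached tarball is copied to /preloaded.tar.lz4 and extracted under /var, and the second check then reports all images preloaded. A compact recap of those steps as a Go sketch (pseudo-steps, not minikube source):

package main

import "fmt"

// preloadSteps is an illustrative recap (not minikube source) of the preload
// path seen in the log: missing images trigger a tarball copy and extraction,
// after which the image check passes and loading is skipped.
func preloadSteps(k8sVersion string) []string {
	return []string{
		"sudo crictl images --output json",   // inspect what CRI-O already has
		`stat -c "%s %y" /preloaded.tar.lz4`, // is a tarball already on the guest?
		"scp preloaded-images-k8s-v18-" + k8sVersion + "-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4", // copy over SSH (pseudo-step)
		"sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4", // extract images into /var
		"rm /preloaded.tar.lz4",            // remove the tarball afterwards
		"sudo crictl images --output json", // re-check: now everything is preloaded
	}
}

func main() {
	for _, s := range preloadSteps("v1.28.4") {
		fmt.Println(s)
	}
}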
	I1207 21:15:47.146069   50624 ssh_runner.go:195] Run: crio config
	I1207 21:15:47.205864   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:15:47.205888   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:15:47.205904   50624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:15:47.205933   50624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-598346 NodeName:embed-certs-598346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:15:47.206106   50624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-598346"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:15:47.206189   50624 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-598346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
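The kubeadm config and kubelet drop-in above are rendered from the kubeadm options listed earlier (node name, node IP, CRI socket, and so on). A hypothetical Go sketch using text/template to render just the nodeRegistration stanza from those values; minikube's real templates are not shown in this log:

package main

import (
	"os"
	"text/template"
)

// Hypothetical sketch: render the nodeRegistration stanza of the kubeadm
// config above from the values visible in the log. This only illustrates the
// parameterization, not minikube's actual template code.
const nodeReg = `nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("nodeRegistration").Parse(nodeReg))
	_ = t.Execute(os.Stdout, map[string]string{
		"CRISocket": "unix:///var/run/crio/crio.sock",
		"NodeName":  "embed-certs-598346",
		"NodeIP":    "192.168.72.180",
	})
}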
	I1207 21:15:47.206249   50624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:15:47.214998   50624 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:15:47.215065   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:15:47.223252   50624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1207 21:15:47.239698   50624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:15:47.258476   50624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1207 21:15:47.275957   50624 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1207 21:15:47.279689   50624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:15:47.295204   50624 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346 for IP: 192.168.72.180
	I1207 21:15:47.295234   50624 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:15:47.295391   50624 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:15:47.295436   50624 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:15:47.295501   50624 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/client.key
	I1207 21:15:47.295552   50624 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.key.379caec1
	I1207 21:15:47.295589   50624 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.key
	I1207 21:15:47.295686   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:15:47.295712   50624 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:15:47.295722   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:15:47.295748   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:15:47.295772   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:15:47.295795   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:15:47.295835   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:15:47.296438   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:15:47.324057   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:15:47.350921   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:15:47.378603   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:15:47.405443   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:15:47.429942   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:15:47.455437   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:15:47.478735   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:15:47.503326   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:15:47.525886   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:15:47.549414   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:15:47.572018   50624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:15:47.590990   50624 ssh_runner.go:195] Run: openssl version
	I1207 21:15:47.597874   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:15:47.610087   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.615875   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.615949   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.622941   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:15:47.632217   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:15:47.641323   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.645877   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.645955   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.651452   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:15:47.660848   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:15:47.670225   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.674620   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.674670   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.680118   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:15:47.689444   50624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:15:47.693775   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:15:47.699741   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:15:47.705442   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:15:47.710938   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:15:47.716367   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:15:47.721958   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
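Each openssl x509 -checkend 86400 run above asks whether the certificate will still be valid 24 hours from now. An equivalent check written as a Go sketch (the path mirrors the logged command; this is not the code minikube runs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path will expire within d,
// mirroring what "openssl x509 -noout -in <path> -checkend 86400" tests.
// Illustrative sketch only.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; on the guest this would check the
	// apiserver-etcd-client certificate.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}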
	I1207 21:15:47.727403   50624 kubeadm.go:404] StartCluster: {Name:embed-certs-598346 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:15:47.727520   50624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:15:47.727599   50624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:15:47.771682   50624 cri.go:89] found id: ""
	I1207 21:15:47.771763   50624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:15:47.782923   50624 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:15:47.782946   50624 kubeadm.go:636] restartCluster start
	I1207 21:15:47.783020   50624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:15:47.791494   50624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.792645   50624 kubeconfig.go:92] found "embed-certs-598346" server: "https://192.168.72.180:8443"
	I1207 21:15:47.794953   50624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:15:47.804014   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:47.804096   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:47.815412   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.815433   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:47.815503   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:47.825646   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:48.326356   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:48.326438   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:48.338771   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:48.826334   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:48.826405   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:48.837498   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:49.325998   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:49.326084   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:49.338197   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:49.825701   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:49.825821   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:49.842649   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:50.326181   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:50.326277   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:50.341560   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:50.826087   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:50.826183   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:50.841186   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.021061   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:47.021495   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:47.021519   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:47.021459   51634 retry.go:31] will retry after 1.183999845s: waiting for machine to come up
	I1207 21:15:48.206768   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:48.207222   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:48.207250   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:48.207183   51634 retry.go:31] will retry after 1.595211966s: waiting for machine to come up
	I1207 21:15:49.804832   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:49.805298   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:49.805328   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:49.805229   51634 retry.go:31] will retry after 2.126345359s: waiting for machine to come up
	I1207 21:15:51.325994   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:51.326083   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:51.338573   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:51.826180   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:51.826253   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:51.837573   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:52.326115   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:52.326192   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:52.336984   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:52.826590   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:52.826681   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:52.837678   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:53.326205   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:53.326279   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:53.337579   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:53.826047   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:53.826145   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:53.840263   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:54.325765   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:54.325842   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:54.337452   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:54.825969   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:54.826063   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:54.837428   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:55.325968   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:55.326060   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:55.337128   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:55.826749   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:55.826832   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:55.838002   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:51.933915   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:51.934338   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:51.934372   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:51.934279   51634 retry.go:31] will retry after 2.448139802s: waiting for machine to come up
	I1207 21:15:54.384038   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:54.384399   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:54.384425   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:54.384351   51634 retry.go:31] will retry after 3.211975182s: waiting for machine to come up
	I1207 21:15:56.325893   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:56.326007   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:56.337698   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:56.825827   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:56.825964   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:56.836945   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:57.326560   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:57.326637   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:57.337299   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:57.804902   50624 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:15:57.804933   50624 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:15:57.804946   50624 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:15:57.805023   50624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:15:57.846788   50624 cri.go:89] found id: ""
	I1207 21:15:57.846877   50624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:15:57.861513   50624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:15:57.869730   50624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:15:57.869781   50624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:15:57.877777   50624 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:15:57.877801   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:57.992244   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:58.878385   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.051985   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.136414   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.232261   50624 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:15:59.232358   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:59.246262   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:59.760617   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:00.260132   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:00.760723   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
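After the kubeadm init phases, the repeated pgrep runs poll for the kube-apiserver process roughly every half second until it appears. A Go sketch of such a wait loop; the interval and deadline are assumptions read off the log cadence, not minikube constants:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process the same way the
// repeated pgrep runs in the log do. Illustrative sketch only.
func waitForAPIServer(deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", deadline)
}

func main() {
	if err := waitForAPIServer(60 * time.Second); err != nil {
		fmt.Println(err)
	}
}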
	I1207 21:15:57.599056   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:57.599417   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:57.599444   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:57.599382   51634 retry.go:31] will retry after 5.532381184s: waiting for machine to come up
	I1207 21:16:04.442905   51113 start.go:369] acquired machines lock for "default-k8s-diff-port-275828" in 3m9.513966804s
	I1207 21:16:04.442972   51113 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:16:04.442985   51113 fix.go:54] fixHost starting: 
	I1207 21:16:04.443390   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:04.443434   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:04.460087   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45507
	I1207 21:16:04.460495   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:04.460991   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:04.461014   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:04.461405   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:04.461582   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:04.461705   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:04.463304   51113 fix.go:102] recreateIfNeeded on default-k8s-diff-port-275828: state=Stopped err=<nil>
	I1207 21:16:04.463337   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	W1207 21:16:04.463494   51113 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:16:04.465895   51113 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-275828" ...
	I1207 21:16:04.467328   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Start
	I1207 21:16:04.467485   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring networks are active...
	I1207 21:16:04.468206   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring network default is active
	I1207 21:16:04.468581   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring network mk-default-k8s-diff-port-275828 is active
	I1207 21:16:04.468943   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Getting domain xml...
	I1207 21:16:04.469483   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Creating domain...
	I1207 21:16:03.134233   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.134762   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has current primary IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.134794   51037 main.go:141] libmachine: (no-preload-950431) Found IP for machine: 192.168.50.100
	I1207 21:16:03.134811   51037 main.go:141] libmachine: (no-preload-950431) Reserving static IP address...
	I1207 21:16:03.135186   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.135209   51037 main.go:141] libmachine: (no-preload-950431) Reserved static IP address: 192.168.50.100
	I1207 21:16:03.135230   51037 main.go:141] libmachine: (no-preload-950431) DBG | skip adding static IP to network mk-no-preload-950431 - found existing host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"}
	I1207 21:16:03.135251   51037 main.go:141] libmachine: (no-preload-950431) DBG | Getting to WaitForSSH function...
	I1207 21:16:03.135265   51037 main.go:141] libmachine: (no-preload-950431) Waiting for SSH to be available...
	I1207 21:16:03.137331   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.137662   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.137689   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.137792   51037 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH client type: external
	I1207 21:16:03.137817   51037 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa (-rw-------)
	I1207 21:16:03.137854   51037 main.go:141] libmachine: (no-preload-950431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:03.137871   51037 main.go:141] libmachine: (no-preload-950431) DBG | About to run SSH command:
	I1207 21:16:03.137890   51037 main.go:141] libmachine: (no-preload-950431) DBG | exit 0
	I1207 21:16:03.229593   51037 main.go:141] libmachine: (no-preload-950431) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:03.230019   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:16:03.230604   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:03.233069   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.233426   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.233462   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.233661   51037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:16:03.233837   51037 machine.go:88] provisioning docker machine ...
	I1207 21:16:03.233855   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:03.234081   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.234254   51037 buildroot.go:166] provisioning hostname "no-preload-950431"
	I1207 21:16:03.234277   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.234386   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.236593   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.236859   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.236892   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.237079   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.237243   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.237396   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.237522   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.237653   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.238000   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.238016   51037 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-950431 && echo "no-preload-950431" | sudo tee /etc/hostname
	I1207 21:16:03.374959   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-950431
	
	I1207 21:16:03.374999   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.377825   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.378212   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.378247   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.378389   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.378604   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.378763   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.378896   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.379041   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.379363   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.379399   51037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950431/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:03.510050   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:03.510081   51037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:03.510109   51037 buildroot.go:174] setting up certificates
	I1207 21:16:03.510119   51037 provision.go:83] configureAuth start
	I1207 21:16:03.510130   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.510367   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:03.512754   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.513120   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.513151   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.513289   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.515546   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.515894   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.515947   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.516066   51037 provision.go:138] copyHostCerts
	I1207 21:16:03.516119   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:03.516138   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:03.516206   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:03.516294   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:03.516303   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:03.516328   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:03.516398   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:03.516406   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:03.516430   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:03.516480   51037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.no-preload-950431 san=[192.168.50.100 192.168.50.100 localhost 127.0.0.1 minikube no-preload-950431]
	I1207 21:16:03.662663   51037 provision.go:172] copyRemoteCerts
	I1207 21:16:03.662732   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:03.662756   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.665043   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.665344   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.665379   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.665523   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.665713   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.665887   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.666049   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:03.757956   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:03.782348   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1207 21:16:03.806388   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:16:03.831058   51037 provision.go:86] duration metric: configureAuth took 320.927373ms
	I1207 21:16:03.831086   51037 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:03.831264   51037 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:16:03.831365   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.834104   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.834489   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.834535   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.834703   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.834901   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.835087   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.835224   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.835370   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.835699   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.835721   51037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:04.154758   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:04.154783   51037 machine.go:91] provisioned docker machine in 920.933844ms
	I1207 21:16:04.154795   51037 start.go:300] post-start starting for "no-preload-950431" (driver="kvm2")
	I1207 21:16:04.154810   51037 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:04.154829   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.155148   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:04.155173   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.157776   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.158131   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.158163   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.158336   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.158560   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.158733   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.158873   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.258325   51037 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:04.262930   51037 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:04.262950   51037 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:04.263011   51037 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:04.263077   51037 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:04.263177   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:04.271602   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:04.303816   51037 start.go:303] post-start completed in 148.990598ms
	I1207 21:16:04.303849   51037 fix.go:56] fixHost completed within 23.617201529s
	I1207 21:16:04.303873   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.306576   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.306930   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.306962   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.307104   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.307326   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.307458   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.307591   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.307773   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:04.308242   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:04.308260   51037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:04.442724   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983764.388433819
	
	I1207 21:16:04.442748   51037 fix.go:206] guest clock: 1701983764.388433819
	I1207 21:16:04.442757   51037 fix.go:219] Guest: 2023-12-07 21:16:04.388433819 +0000 UTC Remote: 2023-12-07 21:16:04.303852803 +0000 UTC m=+192.597462932 (delta=84.581016ms)
	I1207 21:16:04.442797   51037 fix.go:190] guest clock delta is within tolerance: 84.581016ms
	I1207 21:16:04.442801   51037 start.go:83] releasing machines lock for "no-preload-950431", held for 23.756181397s
	I1207 21:16:04.442827   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.443065   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:04.446137   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.446578   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.446612   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.446797   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447413   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447656   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447732   51037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:04.447783   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.447902   51037 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:04.447923   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.450882   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451025   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451253   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.451280   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451470   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.451481   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.451507   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451654   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.451720   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.451923   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.452043   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.452098   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.452561   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.452761   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.565982   51037 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:04.573821   51037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:04.741571   51037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:04.749951   51037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:04.750038   51037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:04.770148   51037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:04.770176   51037 start.go:475] detecting cgroup driver to use...
	I1207 21:16:04.770244   51037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:04.787798   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:04.802346   51037 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:04.802415   51037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:04.819638   51037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:04.836910   51037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:04.947330   51037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:05.087698   51037 docker.go:219] disabling docker service ...
	I1207 21:16:05.087794   51037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:05.104790   51037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:05.122187   51037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:05.252225   51037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:05.394598   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:05.408596   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:05.429804   51037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:16:05.429876   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.441617   51037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:05.441700   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.452787   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.462684   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.472827   51037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:05.485493   51037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:05.495282   51037 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:05.495367   51037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:05.512972   51037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:05.523817   51037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:05.674940   51037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:05.866827   51037 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:05.866913   51037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:05.873044   51037 start.go:543] Will wait 60s for crictl version
	I1207 21:16:05.873109   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:05.878484   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:05.919888   51037 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:05.919979   51037 ssh_runner.go:195] Run: crio --version
	I1207 21:16:05.976795   51037 ssh_runner.go:195] Run: crio --version
	I1207 21:16:06.034745   51037 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1207 21:16:01.260865   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:01.760580   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:01.790951   50624 api_server.go:72] duration metric: took 2.55868777s to wait for apiserver process to appear ...
	I1207 21:16:01.790981   50624 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:01.791000   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.338427   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:05.338467   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:05.338483   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.436356   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:05.436385   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:05.937143   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.943626   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:05.943656   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:06.036269   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:06.039546   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:06.039919   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:06.039968   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:06.040205   51037 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:06.044899   51037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:06.061053   51037 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:16:06.061106   51037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:06.099113   51037 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1207 21:16:06.099136   51037 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:16:06.099196   51037 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:06.099225   51037 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.099246   51037 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1207 21:16:06.099283   51037 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.099314   51037 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.099229   51037 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.099419   51037 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.099484   51037 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.100960   51037 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.100961   51037 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.101035   51037 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1207 21:16:06.100967   51037 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.100967   51037 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.100970   51037 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.100970   51037 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.100973   51037 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:06.234869   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.272014   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.275605   51037 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1207 21:16:06.275659   51037 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.275716   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.295068   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.329385   51037 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1207 21:16:06.329435   51037 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.329449   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.329486   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.356701   51037 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1207 21:16:06.356744   51037 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.356790   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.382536   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1207 21:16:06.389671   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.391917   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.399801   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.399908   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.399980   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.400067   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.409081   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.616824   51037 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1207 21:16:06.616864   51037 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1207 21:16:06.616876   51037 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.616884   51037 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.616923   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.616930   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.617038   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1207 21:16:06.617075   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1207 21:16:06.617086   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.617114   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:06.617122   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.617199   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1207 21:16:06.617272   51037 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1207 21:16:06.617286   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:06.617305   51037 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.617353   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.631975   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.632094   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1207 21:16:06.632181   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.436900   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:06.457077   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:06.457122   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:06.936534   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:06.943658   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1207 21:16:06.952206   50624 api_server.go:141] control plane version: v1.28.4
	I1207 21:16:06.952239   50624 api_server.go:131] duration metric: took 5.161250619s to wait for apiserver health ...
	I1207 21:16:06.952251   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:16:06.952259   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:06.954179   50624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:16:05.844251   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting to get IP...
	I1207 21:16:05.845419   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:05.845793   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:05.845896   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:05.845790   51802 retry.go:31] will retry after 224.053393ms: waiting for machine to come up
	I1207 21:16:06.071071   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.071521   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.071545   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.071464   51802 retry.go:31] will retry after 272.776477ms: waiting for machine to come up
	I1207 21:16:06.346126   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.346739   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.346773   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.346683   51802 retry.go:31] will retry after 373.022784ms: waiting for machine to come up
	I1207 21:16:06.721567   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.722089   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.722115   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.722029   51802 retry.go:31] will retry after 380.100559ms: waiting for machine to come up
	I1207 21:16:07.103408   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.103853   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.103884   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:07.103798   51802 retry.go:31] will retry after 473.24776ms: waiting for machine to come up
	I1207 21:16:07.578548   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.579087   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.579232   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:07.579176   51802 retry.go:31] will retry after 892.826082ms: waiting for machine to come up
	I1207 21:16:08.473531   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:08.474027   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:08.474058   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:08.473989   51802 retry.go:31] will retry after 1.042648737s: waiting for machine to come up
	I1207 21:16:09.518823   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:09.519321   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:09.519363   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:09.519213   51802 retry.go:31] will retry after 948.481622ms: waiting for machine to come up
	I1207 21:16:06.955727   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:06.967724   50624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
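	The bridge CNI step above transfers a conflist to /etc/cni/net.d/1-k8s.conflist. The exact JSON minikube renders is not reproduced in the log, so the sketch below writes a generic bridge+portmap conflist using the 10.244.0.0/16 pod CIDR that appears later in the kubeadm config; the plugin fields are illustrative assumptions, not minikube's template:

```go
// Sketch only: writes a bridge CNI conflist similar in shape to the one
// transferred above. Field values other than the file path and pod CIDR are
// assumptions for illustration.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// 0644 is a typical permission for CNI config files; adjust as needed.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```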
	I1207 21:16:06.990163   50624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:07.001387   50624 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:07.001425   50624 system_pods.go:61] "coredns-5dd5756b68-hlpsb" [c1f9f7db-0741-483c-9e39-d6f0ce4715d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:07.001436   50624 system_pods.go:61] "etcd-embed-certs-598346" [acda3700-87a2-4442-94e6-1d17288e7cee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:07.001446   50624 system_pods.go:61] "kube-apiserver-embed-certs-598346" [e1439056-061b-4add-a399-c55a816fba70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:07.001456   50624 system_pods.go:61] "kube-controller-manager-embed-certs-598346" [b4c80c36-da2c-4c46-b655-3c6bb2a96ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:07.001466   50624 system_pods.go:61] "kube-proxy-jqhnn" [e2635205-e67a-4b56-a7b4-82fe97b5fe7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:07.001490   50624 system_pods.go:61] "kube-scheduler-embed-certs-598346" [3b90e1d4-9c0f-46e4-a7b7-5e42717a8b70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:07.001499   50624 system_pods.go:61] "metrics-server-57f55c9bc5-sndh4" [9a052ce0-760f-4cfd-a958-971daa14ea02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:07.001511   50624 system_pods.go:61] "storage-provisioner" [bf244954-a1d7-4b51-9085-387e60d02792] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:07.001524   50624 system_pods.go:74] duration metric: took 11.336763ms to wait for pod list to return data ...
	I1207 21:16:07.001538   50624 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:07.007697   50624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:07.007737   50624 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:07.007752   50624 node_conditions.go:105] duration metric: took 6.207447ms to run NodePressure ...
	I1207 21:16:07.007770   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:07.287760   50624 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:07.297260   50624 kubeadm.go:787] kubelet initialised
	I1207 21:16:07.297285   50624 kubeadm.go:788] duration metric: took 9.495153ms waiting for restarted kubelet to initialise ...
	I1207 21:16:07.297296   50624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:07.304800   50624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.313488   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.313523   50624 pod_ready.go:81] duration metric: took 8.689063ms waiting for pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.313535   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.313545   50624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.321603   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "etcd-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.321637   50624 pod_ready.go:81] duration metric: took 8.078752ms waiting for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.321649   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "etcd-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.321658   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.333040   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.333068   50624 pod_ready.go:81] duration metric: took 11.399287ms waiting for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.333081   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.333089   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.397606   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.397632   50624 pod_ready.go:81] duration metric: took 64.53373ms waiting for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.397642   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.397648   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqhnn" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:08.713161   50624 pod_ready.go:92] pod "kube-proxy-jqhnn" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:08.713188   50624 pod_ready.go:81] duration metric: took 1.315530906s waiting for pod "kube-proxy-jqhnn" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:08.713201   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:10.919896   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
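	The pod_ready entries above fetch each system pod and test whether its Ready condition is True, skipping pods whose node is not yet Ready. A hedged client-go sketch of the core check (the node-Ready refinement and the retry loop are omitted; the pod name is taken from the log):

```go
// Hedged sketch: report whether a kube-system pod has condition Ready=True,
// roughly what the pod_ready waits above are testing on each poll.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-embed-certs-598346", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}
```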
	I1207 21:16:07.059825   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:10.061030   51037 ssh_runner.go:235] Completed: which crictl: (3.443650725s)
	I1207 21:16:10.061121   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:10.061130   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (3.443992158s)
	I1207 21:16:10.061160   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1207 21:16:10.061174   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.444033736s)
	I1207 21:16:10.061199   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1207 21:16:10.061225   51037 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:10.061245   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1: (3.429236441s)
	I1207 21:16:10.061286   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:10.061294   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:10.061296   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.429094571s)
	I1207 21:16:10.061330   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1207 21:16:10.061346   51037 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.001491955s)
	I1207 21:16:10.061361   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:10.061387   51037 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1207 21:16:10.061402   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:10.061430   51037 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:10.061469   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:10.469685   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:10.470224   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:10.470251   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:10.470187   51802 retry.go:31] will retry after 1.846436384s: waiting for machine to come up
	I1207 21:16:12.319116   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:12.319558   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:12.319590   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:12.319512   51802 retry.go:31] will retry after 1.415005437s: waiting for machine to come up
	I1207 21:16:13.736082   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:13.736599   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:13.736630   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:13.736533   51802 retry.go:31] will retry after 2.499952402s: waiting for machine to come up
	I1207 21:16:13.413966   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:15.414181   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:14.287122   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.225788884s)
	I1207 21:16:14.287166   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1207 21:16:14.287165   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: (4.226018563s)
	I1207 21:16:14.287190   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:14.287204   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:14.287130   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (4.225706156s)
	I1207 21:16:14.287208   51037 ssh_runner.go:235] Completed: which crictl: (4.225716226s)
	I1207 21:16:14.287294   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:14.287310   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (4.225934747s)
	I1207 21:16:14.287322   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1207 21:16:14.287325   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:14.287270   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1207 21:16:14.287238   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:14.338957   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 21:16:14.339087   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:16.589704   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.302291312s)
	I1207 21:16:16.589740   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1207 21:16:16.589764   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:16.589777   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.302463063s)
	I1207 21:16:16.589816   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:16.589817   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1207 21:16:16.589887   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.250737859s)
	I1207 21:16:16.589912   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1207 21:16:16.238979   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:16.239340   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:16.239367   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:16.239304   51802 retry.go:31] will retry after 2.478988074s: waiting for machine to come up
	I1207 21:16:18.720359   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:18.720892   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:18.720925   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:18.720840   51802 retry.go:31] will retry after 4.119588433s: waiting for machine to come up
	I1207 21:16:17.913477   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:18.407386   50624 pod_ready.go:92] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:18.407417   50624 pod_ready.go:81] duration metric: took 9.694207323s waiting for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:18.407431   50624 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:20.429952   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:18.142546   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (1.552699587s)
	I1207 21:16:18.142620   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1207 21:16:18.142658   51037 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:18.142737   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:20.432330   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.289556402s)
	I1207 21:16:20.432358   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1207 21:16:20.432386   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:20.432436   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:22.843120   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:22.843516   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:22.843540   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:22.843470   51802 retry.go:31] will retry after 3.969701228s: waiting for machine to come up
	I1207 21:16:22.431295   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:24.929166   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:22.891954   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.459495307s)
	I1207 21:16:22.891978   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1207 21:16:22.892001   51037 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:22.892056   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:23.742939   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1207 21:16:23.743011   51037 cache_images.go:123] Successfully loaded all cached images
	I1207 21:16:23.743021   51037 cache_images.go:92] LoadImages completed in 17.643875393s
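	The cache_images flow above stats each cached tarball on the node, skips the copy when the file already exists, and loads it into CRI-O's image store with `podman load`. A local approximation of that loop (in minikube the commands run over SSH via ssh_runner; paths and image names below come from the log):

```go
// Hedged sketch of the image-load loop: for each cached tarball, verify it is
// present, then run `sudo podman load -i <tarball>` to import it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func loadCached(dir string, images []string) error {
	for _, img := range images {
		tarball := filepath.Join(dir, img)
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("missing cached image %s: %w", tarball, err)
		}
		// Equivalent of: sudo podman load -i /var/lib/minikube/images/<img>
		cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", tarball, err)
		}
	}
	return nil
}

func main() {
	imgs := []string{"kube-apiserver_v1.29.0-rc.1", "etcd_3.5.10-0", "coredns_v1.11.1"}
	if err := loadCached("/var/lib/minikube/images", imgs); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```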
	I1207 21:16:23.743107   51037 ssh_runner.go:195] Run: crio config
	I1207 21:16:23.802064   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:16:23.802087   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:23.802106   51037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:23.802128   51037 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.100 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-950431 NodeName:no-preload-950431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:16:23.802258   51037 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-950431"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:23.802329   51037 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-950431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-950431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:16:23.802382   51037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1207 21:16:23.813052   51037 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:23.813143   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:23.823249   51037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1207 21:16:23.840999   51037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1207 21:16:23.857599   51037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1207 21:16:23.873664   51037 ssh_runner.go:195] Run: grep 192.168.50.100	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:23.877208   51037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:23.888109   51037 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431 for IP: 192.168.50.100
	I1207 21:16:23.888148   51037 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:23.888298   51037 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:23.888333   51037 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:23.888394   51037 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.key
	I1207 21:16:23.888453   51037 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.key.8f36cd02
	I1207 21:16:23.888490   51037 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.key
	I1207 21:16:23.888598   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:23.888626   51037 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:23.888638   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:23.888669   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:23.888701   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:23.888725   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:23.888769   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:23.889405   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:23.911313   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 21:16:23.935796   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:23.960576   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:16:23.983952   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:24.005755   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:24.027232   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:24.049398   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:24.073975   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:24.097326   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:24.118396   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:24.140590   51037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:24.157287   51037 ssh_runner.go:195] Run: openssl version
	I1207 21:16:24.163079   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:24.173618   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.177973   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.178038   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.183537   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:24.193750   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:24.203836   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.208278   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.208324   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.213906   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:24.223939   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:24.234037   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.238379   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.238443   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.243650   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:24.253904   51037 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:24.258343   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:24.264011   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:24.269609   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:24.275294   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:24.280969   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:24.286763   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
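	The `openssl x509 -checkend 86400` calls above confirm that each control-plane certificate is still valid 24 hours from now. The same check done natively in Go is sketched below (certificate paths are taken from the log; this is an illustration, not minikube's implementation):

```go
// Hedged sketch: a certificate passes the check when it will still be valid
// at now + 24h, mirroring `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		ok, err := validFor(c, 24*time.Hour) // same window as -checkend 86400
		fmt.Println(c, ok, err)
	}
}
```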
	I1207 21:16:24.292414   51037 kubeadm.go:404] StartCluster: {Name:no-preload-950431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-950431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:24.292505   51037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:24.292565   51037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:24.342426   51037 cri.go:89] found id: ""
	I1207 21:16:24.342596   51037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:24.353900   51037 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:24.353939   51037 kubeadm.go:636] restartCluster start
	I1207 21:16:24.353999   51037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:24.363465   51037 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.364722   51037 kubeconfig.go:92] found "no-preload-950431" server: "https://192.168.50.100:8443"
	I1207 21:16:24.367198   51037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:24.378918   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.378971   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.391331   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.391354   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.391393   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.403003   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.903722   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.903814   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.915891   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:25.403459   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:25.403568   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:25.415677   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:25.903683   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:25.903765   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:25.915474   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:26.403146   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:26.403258   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:26.414072   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
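	The repeated "Checking apiserver status" entries above probe for a running kube-apiserver with pgrep and retry roughly every 500ms until a process appears. A minimal local sketch of that loop (in minikube the command runs over SSH on the guest):

```go
// Hedged sketch of the apiserver status probe: retry
// `sudo pgrep -xnf kube-apiserver.*minikube.*` until it reports a PID.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("unable to get apiserver pid: %w", err)
	}
	return string(out), nil
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		if pid, err := apiserverPID(); err == nil {
			fmt.Print("apiserver pid: ", pid)
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
	}
	fmt.Println("apiserver did not come up in time")
}
```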
	I1207 21:16:28.031043   50270 start.go:369] acquired machines lock for "old-k8s-version-483745" in 1m1.958159244s
	I1207 21:16:28.031117   50270 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:16:28.031127   50270 fix.go:54] fixHost starting: 
	I1207 21:16:28.031477   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:28.031504   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:28.047757   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I1207 21:16:28.048134   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:28.048598   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:16:28.048628   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:28.048962   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:28.049123   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:28.049278   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:16:28.050698   50270 fix.go:102] recreateIfNeeded on old-k8s-version-483745: state=Stopped err=<nil>
	I1207 21:16:28.050716   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	W1207 21:16:28.050943   50270 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:16:28.053462   50270 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-483745" ...
	I1207 21:16:28.054995   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Start
	I1207 21:16:28.055169   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring networks are active...
	I1207 21:16:28.055803   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring network default is active
	I1207 21:16:28.056167   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring network mk-old-k8s-version-483745 is active
	I1207 21:16:28.056613   50270 main.go:141] libmachine: (old-k8s-version-483745) Getting domain xml...
	I1207 21:16:28.057267   50270 main.go:141] libmachine: (old-k8s-version-483745) Creating domain...
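	Restarting the stopped kvm2 machine above amounts to re-activating its libvirt networks and starting the existing domain from its saved XML. minikube does this through the kvm2 driver plugin and the libvirt API; the virsh-based sketch below is only an equivalent manual approximation (domain and network names taken from the log):

```go
// Hedged sketch: bring the libvirt networks up and start the existing domain,
// roughly what the .Start call above does via the kvm2 driver.
package main

import (
	"os"
	"os/exec"
)

func virsh(args ...string) error {
	cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	name := "old-k8s-version-483745"
	// Ignore errors from net-start: the networks may already be active.
	_ = virsh("net-start", "default")
	_ = virsh("net-start", "mk-"+name)
	if err := virsh("start", name); err != nil {
		os.Exit(1)
	}
}
```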
	I1207 21:16:26.815724   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.816306   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Found IP for machine: 192.168.39.254
	I1207 21:16:26.816346   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Reserving static IP address...
	I1207 21:16:26.816373   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has current primary IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.816843   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-275828", mac: "52:54:00:f3:1f:c5", ip: "192.168.39.254"} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.816874   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Reserved static IP address: 192.168.39.254
	I1207 21:16:26.816895   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | skip adding static IP to network mk-default-k8s-diff-port-275828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-275828", mac: "52:54:00:f3:1f:c5", ip: "192.168.39.254"}
	I1207 21:16:26.816916   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Getting to WaitForSSH function...
	I1207 21:16:26.816933   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for SSH to be available...
	I1207 21:16:26.819265   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.819625   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.819654   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.819808   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Using SSH client type: external
	I1207 21:16:26.819840   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa (-rw-------)
	I1207 21:16:26.819880   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:26.819908   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | About to run SSH command:
	I1207 21:16:26.819930   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | exit 0
	I1207 21:16:26.913932   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:26.914232   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetConfigRaw
	I1207 21:16:26.915043   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:26.917486   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.917899   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.917944   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.918182   51113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:16:26.918360   51113 machine.go:88] provisioning docker machine ...
	I1207 21:16:26.918380   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:26.918587   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:26.918775   51113 buildroot.go:166] provisioning hostname "default-k8s-diff-port-275828"
	I1207 21:16:26.918805   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:26.918971   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:26.921227   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.921482   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.921515   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.921657   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:26.921818   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:26.922006   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:26.922162   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:26.922317   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:26.922695   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:26.922713   51113 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-275828 && echo "default-k8s-diff-port-275828" | sudo tee /etc/hostname
	I1207 21:16:27.066745   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-275828
	
	I1207 21:16:27.066778   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.069493   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.069842   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.069895   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.070078   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.070295   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.070446   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.070596   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.070824   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.071271   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.071302   51113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-275828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-275828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-275828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:27.206475   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:27.206503   51113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:27.206534   51113 buildroot.go:174] setting up certificates
	I1207 21:16:27.206545   51113 provision.go:83] configureAuth start
	I1207 21:16:27.206553   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:27.206818   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:27.209295   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.209632   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.209666   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.209763   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.211882   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.212147   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.212176   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.212250   51113 provision.go:138] copyHostCerts
	I1207 21:16:27.212306   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:27.212326   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:27.212396   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:27.212501   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:27.212511   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:27.212540   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:27.212617   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:27.212627   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:27.212656   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:27.212728   51113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-275828 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube default-k8s-diff-port-275828]
	I1207 21:16:27.273212   51113 provision.go:172] copyRemoteCerts
	I1207 21:16:27.273291   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:27.273321   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.275905   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.276185   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.276219   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.276380   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.276569   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.276703   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.276814   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:27.371834   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:27.394096   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1207 21:16:27.416619   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:16:27.443103   51113 provision.go:86] duration metric: configureAuth took 236.548224ms
	I1207 21:16:27.443127   51113 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:27.443336   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:16:27.443406   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.446005   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.446303   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.446334   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.446477   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.446648   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.446789   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.446959   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.447158   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.447600   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.447623   51113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:27.760539   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:27.760582   51113 machine.go:91] provisioned docker machine in 842.207987ms
	I1207 21:16:27.760608   51113 start.go:300] post-start starting for "default-k8s-diff-port-275828" (driver="kvm2")
	I1207 21:16:27.760617   51113 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:27.760633   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:27.760993   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:27.761030   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.763527   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.763923   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.763968   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.764077   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.764254   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.764386   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.764559   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:27.860772   51113 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:27.865258   51113 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:27.865285   51113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:27.865348   51113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:27.865422   51113 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:27.865537   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:27.874901   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:27.896890   51113 start.go:303] post-start completed in 136.257327ms
	I1207 21:16:27.896912   51113 fix.go:56] fixHost completed within 23.453929111s
	I1207 21:16:27.896932   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.899422   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.899740   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.899780   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.899916   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.900104   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.900265   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.900400   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.900601   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.900920   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.900935   51113 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:28.030917   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983787.976128099
	
	I1207 21:16:28.030936   51113 fix.go:206] guest clock: 1701983787.976128099
	I1207 21:16:28.030943   51113 fix.go:219] Guest: 2023-12-07 21:16:27.976128099 +0000 UTC Remote: 2023-12-07 21:16:27.896915587 +0000 UTC m=+213.119643923 (delta=79.212512ms)
	I1207 21:16:28.030970   51113 fix.go:190] guest clock delta is within tolerance: 79.212512ms
	I1207 21:16:28.030975   51113 start.go:83] releasing machines lock for "default-k8s-diff-port-275828", held for 23.588040931s
	I1207 21:16:28.031003   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.031255   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:28.033864   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.034277   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.034318   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.034501   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035101   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035283   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035354   51113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:28.035399   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:28.035519   51113 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:28.035543   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:28.038353   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038570   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038636   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.038675   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038789   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:28.038993   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:28.039013   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.039035   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.039152   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:28.039189   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:28.039319   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:28.039368   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:28.039495   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:28.039619   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:28.161850   51113 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:28.167540   51113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:28.311477   51113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:28.319102   51113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:28.319177   51113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:28.334118   51113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:28.334138   51113 start.go:475] detecting cgroup driver to use...
	I1207 21:16:28.334187   51113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:28.351563   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:28.364950   51113 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:28.365015   51113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:28.380367   51113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:28.396070   51113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:28.504230   51113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:28.634829   51113 docker.go:219] disabling docker service ...
	I1207 21:16:28.634893   51113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:28.648955   51113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:28.660615   51113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:28.781577   51113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:28.899307   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:28.912673   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:28.931310   51113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:16:28.931384   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.941006   51113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:28.941083   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.951712   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.963062   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.973981   51113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:28.984828   51113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:28.993884   51113 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:28.993992   51113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:29.007812   51113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:29.017781   51113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:29.147958   51113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:29.329720   51113 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:29.329781   51113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:29.336048   51113 start.go:543] Will wait 60s for crictl version
	I1207 21:16:29.336109   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:16:29.340075   51113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:29.378207   51113 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:29.378289   51113 ssh_runner.go:195] Run: crio --version
	I1207 21:16:29.438034   51113 ssh_runner.go:195] Run: crio --version
	I1207 21:16:29.487899   51113 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:16:29.489336   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:29.492387   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:29.492824   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:29.492858   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:29.493105   51113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:29.497882   51113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:29.510857   51113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:16:29.510910   51113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:29.557513   51113 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:16:29.557590   51113 ssh_runner.go:195] Run: which lz4
	I1207 21:16:29.561849   51113 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 21:16:29.566351   51113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:16:29.566383   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:16:26.930512   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:29.442726   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:26.903645   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:26.903716   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:26.915728   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:27.403874   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:27.403939   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:27.415501   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:27.904082   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:27.904150   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:27.916404   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.404050   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:28.404143   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:28.416757   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.903144   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:28.903202   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:28.914709   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.403236   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:29.403324   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:29.415595   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.903823   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:29.903908   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:29.920093   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:30.403786   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:30.403864   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:30.417374   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:30.903246   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:30.903335   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:30.916333   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:31.403909   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:31.403984   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:31.418792   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.352362   50270 main.go:141] libmachine: (old-k8s-version-483745) Waiting to get IP...
	I1207 21:16:29.353395   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.353871   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.353965   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.353847   51971 retry.go:31] will retry after 307.502031ms: waiting for machine to come up
	I1207 21:16:29.663412   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.663958   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.663990   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.663889   51971 retry.go:31] will retry after 328.013518ms: waiting for machine to come up
	I1207 21:16:29.993550   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.994129   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.994160   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.994066   51971 retry.go:31] will retry after 315.323859ms: waiting for machine to come up
	I1207 21:16:30.310570   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:30.311106   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:30.311139   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:30.311055   51971 retry.go:31] will retry after 547.317149ms: waiting for machine to come up
	I1207 21:16:30.859753   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:30.860500   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:30.860532   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:30.860479   51971 retry.go:31] will retry after 591.81737ms: waiting for machine to come up
	I1207 21:16:31.453939   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:31.454481   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:31.454508   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:31.454426   51971 retry.go:31] will retry after 818.736684ms: waiting for machine to come up
	I1207 21:16:32.274582   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:32.275065   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:32.275100   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:32.275018   51971 retry.go:31] will retry after 865.865666ms: waiting for machine to come up
	I1207 21:16:33.142356   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:33.142713   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:33.142748   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:33.142655   51971 retry.go:31] will retry after 1.270743306s: waiting for machine to come up
	I1207 21:16:31.473652   51113 crio.go:444] Took 1.911834 seconds to copy over tarball
	I1207 21:16:31.473729   51113 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:16:34.448164   51113 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974406678s)
	I1207 21:16:34.448185   51113 crio.go:451] Took 2.974507 seconds to extract the tarball
	I1207 21:16:34.448196   51113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:16:34.493579   51113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:34.555669   51113 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:16:34.555694   51113 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:16:34.555760   51113 ssh_runner.go:195] Run: crio config
	I1207 21:16:34.637813   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:16:34.637855   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:34.637874   51113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:34.637909   51113 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-275828 NodeName:default-k8s-diff-port-275828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:16:34.638088   51113 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.254
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-275828"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:34.638186   51113 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-275828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1207 21:16:34.638255   51113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:16:34.651147   51113 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:34.651264   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:34.660855   51113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1207 21:16:34.678841   51113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:16:34.696338   51113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1207 21:16:34.718058   51113 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:34.722640   51113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:34.737097   51113 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828 for IP: 192.168.39.254
	I1207 21:16:34.737138   51113 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:34.737316   51113 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:34.737367   51113 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:34.737459   51113 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.key
	I1207 21:16:34.737557   51113 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.key.9e1cae77
	I1207 21:16:34.737614   51113 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.key
	I1207 21:16:34.737745   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:34.737783   51113 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:34.737799   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:34.737835   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:34.737870   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:34.737904   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:34.737976   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:34.738542   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:34.768389   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:16:34.801112   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:31.931027   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:34.430620   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:31.903642   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:31.903781   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:31.919330   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:32.403857   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:32.403949   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:32.419078   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:32.903477   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:32.903561   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:32.918946   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:33.403477   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:33.403605   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:33.416411   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:33.903561   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:33.903690   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:33.915554   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:34.379314   51037 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:16:34.379347   51037 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:16:34.379361   51037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:16:34.379450   51037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:34.427182   51037 cri.go:89] found id: ""
	I1207 21:16:34.427255   51037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:16:34.448141   51037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:16:34.462411   51037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:16:34.462494   51037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:34.474410   51037 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:34.474442   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:34.646144   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.548212   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.745964   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.818060   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.899490   51037 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:16:35.899616   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:35.916336   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:36.432466   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:34.415333   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:34.415908   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:34.415935   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:34.415819   51971 retry.go:31] will retry after 1.846003214s: waiting for machine to come up
	I1207 21:16:36.262900   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:36.263321   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:36.263343   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:36.263283   51971 retry.go:31] will retry after 1.858599877s: waiting for machine to come up
	I1207 21:16:38.124144   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:38.124669   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:38.124701   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:38.124622   51971 retry.go:31] will retry after 2.443451278s: waiting for machine to come up
	I1207 21:16:34.830966   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:16:35.094040   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:35.121234   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:35.148659   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:35.176938   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:35.206320   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:35.234907   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:35.261034   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:35.286500   51113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:35.306742   51113 ssh_runner.go:195] Run: openssl version
	I1207 21:16:35.314676   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:35.325752   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.332066   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.332147   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.339606   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:35.350274   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:35.360328   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.365516   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.365593   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.371482   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:35.381328   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:35.391869   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.396986   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.397051   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.402939   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:35.413428   51113 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:35.419598   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:35.427748   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:35.435492   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:35.442272   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:35.450180   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:35.459639   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:16:35.467615   51113 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:35.467736   51113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:35.467793   51113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:35.504593   51113 cri.go:89] found id: ""
	I1207 21:16:35.504685   51113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:35.514155   51113 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:35.514182   51113 kubeadm.go:636] restartCluster start
	I1207 21:16:35.514255   51113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:35.525515   51113 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:35.526798   51113 kubeconfig.go:92] found "default-k8s-diff-port-275828" server: "https://192.168.39.254:8444"
	I1207 21:16:35.529447   51113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:35.540876   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:35.540934   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:35.555494   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:35.555519   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:35.555569   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:35.569455   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.069801   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:36.069903   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:36.083366   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.569984   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:36.570078   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:36.585387   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:37.069869   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:37.069980   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:37.086900   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:37.570490   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:37.570597   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:37.586215   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:38.069601   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:38.069709   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:38.084557   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:38.570194   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:38.570306   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:38.586686   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:39.070433   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:39.070518   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:39.088460   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:39.570579   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:39.570654   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:39.588478   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
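
For context on the block above: restartCluster polls `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH roughly every 500ms, and each non-zero pgrep exit is logged as "stopped: unable to get apiserver pid" until the static-pod apiserver actually comes up. A minimal stand-alone sketch of that poll pattern follows; it is not minikube's ssh_runner code, it runs pgrep locally instead of over SSH, and the 500ms interval and 2-minute deadline are assumptions for illustration.

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServerPID polls pgrep until a kube-apiserver process appears
	// or the context deadline expires. pgrep exits 0 only when a match exists.
	func waitForAPIServerPID(ctx context.Context) (string, error) {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			select {
			case <-ctx.Done():
				return "", fmt.Errorf("apiserver process did not appear: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		pid, err := waitForAPIServerPID(ctx)
		fmt.Println(pid, err)
	}
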
	I1207 21:16:36.785543   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:38.932981   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:36.932228   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:37.432719   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:37.932863   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.432661   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.932210   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.965380   51037 api_server.go:72] duration metric: took 3.065893789s to wait for apiserver process to appear ...
	I1207 21:16:38.965409   51037 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:38.965425   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:40.571221   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:40.571824   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:40.571873   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:40.571774   51971 retry.go:31] will retry after 2.349695925s: waiting for machine to come up
	I1207 21:16:42.923107   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:42.923582   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:42.923618   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:42.923549   51971 retry.go:31] will retry after 4.503894046s: waiting for machine to come up
	I1207 21:16:40.070126   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:40.070229   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:40.085086   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:40.570237   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:40.570329   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:40.584997   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:41.069554   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:41.069706   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:41.084654   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:41.570175   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:41.570260   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:41.581973   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:42.070546   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:42.070641   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:42.085859   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:42.570428   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:42.570534   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:42.585491   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.070017   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:43.070132   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:43.082461   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.569992   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:43.570093   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:43.585221   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:44.069681   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:44.069749   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:44.081499   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:44.569999   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:44.570083   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:44.585512   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.598644   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:43.598675   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:43.598689   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:43.649508   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:43.649553   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:44.150221   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:44.155890   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:44.155914   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:44.649610   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:44.655402   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:44.655437   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:45.150082   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:45.156432   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 200:
	ok
	I1207 21:16:45.172948   51037 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 21:16:45.172983   51037 api_server.go:131] duration metric: took 6.207566234s to wait for apiserver health ...
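
The healthz sequence above is the normal startup progression: the anonymous probe gets 403 Forbidden until the rbac/bootstrap-roles poststarthook installs the default RBAC policy, then 500 while individual poststarthooks are still pending (the failing ones are listed with "[-]"), and finally 200 "ok". A rough stand-in for that probe, not minikube's api_server.go; the URL is the one from the log, and TLS verification is skipped because the probe is anonymous against a self-signed cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz issues one anonymous GET against the apiserver /healthz endpoint
	// and returns the status code plus body (which lists failing poststarthooks on 500).
	func probeHealthz(url string) (int, string, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster uses a self-signed CA, so certificate verification is skipped here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return 0, "", err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode, string(body), nil
	}

	func main() {
		for {
			code, body, err := probeHealthz("https://192.168.50.100:8443/healthz")
			fmt.Println(code, err)
			if code == http.StatusOK {
				fmt.Println(body) // "ok"
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
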
	I1207 21:16:45.172996   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:16:45.173002   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:45.175018   51037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:16:41.430106   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:43.431417   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:45.932644   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:45.176436   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:45.231836   51037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
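
The 457-byte file written here is the bridge CNI network list generated by the "Configuring bridge CNI" step. Its exact contents are not shown in the log; the snippet below only illustrates the general shape of a bridge-plugin conflist (host-local IPAM, IP masquerade), with example field values rather than minikube's actual template:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Build and print a minimal bridge CNI network list. Values are examples only.
	func main() {
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		out, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(out))
	}
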
	I1207 21:16:45.250256   51037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:45.270151   51037 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:45.270188   51037 system_pods.go:61] "coredns-76f75df574-qfwbr" [577161a0-8d68-41cc-88cd-1bd56e99b7aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:45.270198   51037 system_pods.go:61] "etcd-no-preload-950431" [8e49a6a7-c1e5-469d-9b30-c8e59471effb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:45.270210   51037 system_pods.go:61] "kube-apiserver-no-preload-950431" [15bc33db-995d-4102-9a2b-e991209c2946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:45.270220   51037 system_pods.go:61] "kube-controller-manager-no-preload-950431" [c263b58e-2aea-455d-8b2f-8915f1c6e820] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:45.270232   51037 system_pods.go:61] "kube-proxy-mzv22" [96e51e2f-17be-4724-ae28-99dfa63e9976] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:45.270241   51037 system_pods.go:61] "kube-scheduler-no-preload-950431" [c040d573-c78f-4149-8be6-af33fc6ea186] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:45.270257   51037 system_pods.go:61] "metrics-server-57f55c9bc5-fv8x4" [ac03a70e-1059-474f-b6f6-5974f0900bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:45.270268   51037 system_pods.go:61] "storage-provisioner" [3f942481-221c-4e69-a876-f82676cde788] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:45.270279   51037 system_pods.go:74] duration metric: took 19.99813ms to wait for pod list to return data ...
	I1207 21:16:45.270291   51037 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:45.274636   51037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:45.274667   51037 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:45.274681   51037 node_conditions.go:105] duration metric: took 4.381452ms to run NodePressure ...
	I1207 21:16:45.274700   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:45.597857   51037 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:45.603394   51037 kubeadm.go:787] kubelet initialised
	I1207 21:16:45.603423   51037 kubeadm.go:788] duration metric: took 5.535827ms waiting for restarted kubelet to initialise ...
	I1207 21:16:45.603432   51037 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:45.612509   51037 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-qfwbr" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:47.430850   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.431364   50270 main.go:141] libmachine: (old-k8s-version-483745) Found IP for machine: 192.168.61.171
	I1207 21:16:47.431389   50270 main.go:141] libmachine: (old-k8s-version-483745) Reserving static IP address...
	I1207 21:16:47.431415   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has current primary IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.431791   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "old-k8s-version-483745", mac: "52:54:00:55:c8:35", ip: "192.168.61.171"} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.431827   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | skip adding static IP to network mk-old-k8s-version-483745 - found existing host DHCP lease matching {name: "old-k8s-version-483745", mac: "52:54:00:55:c8:35", ip: "192.168.61.171"}
	I1207 21:16:47.431845   50270 main.go:141] libmachine: (old-k8s-version-483745) Reserved static IP address: 192.168.61.171
	I1207 21:16:47.431866   50270 main.go:141] libmachine: (old-k8s-version-483745) Waiting for SSH to be available...
	I1207 21:16:47.431884   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Getting to WaitForSSH function...
	I1207 21:16:47.434071   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.434391   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.434423   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.434511   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Using SSH client type: external
	I1207 21:16:47.434548   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa (-rw-------)
	I1207 21:16:47.434590   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:47.434624   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | About to run SSH command:
	I1207 21:16:47.434642   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | exit 0
	I1207 21:16:47.529747   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:47.530150   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetConfigRaw
	I1207 21:16:47.530743   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:47.533361   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.533690   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.533728   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.534019   50270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/config.json ...
	I1207 21:16:47.534201   50270 machine.go:88] provisioning docker machine ...
	I1207 21:16:47.534219   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:47.534379   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.534549   50270 buildroot.go:166] provisioning hostname "old-k8s-version-483745"
	I1207 21:16:47.534578   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.534793   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.537037   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.537448   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.537482   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.537621   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:47.537788   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.537963   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.538107   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:47.538276   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:47.538728   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:47.538751   50270 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-483745 && echo "old-k8s-version-483745" | sudo tee /etc/hostname
	I1207 21:16:47.694514   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-483745
	
	I1207 21:16:47.694552   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.697720   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.698181   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.698217   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.698413   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:47.698602   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.698752   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.698958   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:47.699158   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:47.699617   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:47.699646   50270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-483745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-483745/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-483745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:47.851750   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:47.851781   50270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:47.851817   50270 buildroot.go:174] setting up certificates
	I1207 21:16:47.851830   50270 provision.go:83] configureAuth start
	I1207 21:16:47.851848   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.852181   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:47.855229   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.855607   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.855633   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.855891   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.858432   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.858811   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.858868   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.859066   50270 provision.go:138] copyHostCerts
	I1207 21:16:47.859126   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:47.859146   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:47.859211   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:47.859312   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:47.859322   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:47.859352   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:47.859426   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:47.859436   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:47.859465   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:47.859532   50270 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-483745 san=[192.168.61.171 192.168.61.171 localhost 127.0.0.1 minikube old-k8s-version-483745]
	I1207 21:16:48.080700   50270 provision.go:172] copyRemoteCerts
	I1207 21:16:48.080764   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:48.080787   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.083799   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.084261   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.084325   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.084545   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.084752   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.084874   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.085025   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.188586   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:48.217051   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1207 21:16:48.245046   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:16:48.276344   50270 provision.go:86] duration metric: configureAuth took 424.496766ms
	I1207 21:16:48.276381   50270 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:48.276627   50270 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:16:48.276720   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.280119   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.280556   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.280627   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.280943   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.281127   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.281312   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.281452   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.281621   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:48.282136   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:48.282160   50270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:45.070516   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:45.070618   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:45.087880   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:45.541593   51113 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:16:45.541627   51113 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:16:45.541640   51113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:16:45.541714   51113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:45.589291   51113 cri.go:89] found id: ""
	I1207 21:16:45.589394   51113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:16:45.606397   51113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:16:45.616135   51113 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:16:45.616192   51113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:45.625661   51113 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:45.625689   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:45.750072   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.619750   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.838835   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.935494   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
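
The five commands above are the cluster-restart path: instead of a full `kubeadm init`, minikube re-runs individual init phases (certs, kubeconfigs, kubelet-start, control-plane manifests, local etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A compact sketch of the same phase sequence, with the PATH/env handling from the log omitted and only minimal error handling:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Re-running the individual kubeadm init phases, in the order shown in the log,
	// rebuilds certs, kubeconfigs, static-pod manifests and local etcd for an existing
	// node without performing a full `kubeadm init`.
	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("kubeadm", args...).CombinedOutput()
			fmt.Printf("kubeadm %v: err=%v\n%s\n", args, err, out)
		}
	}
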
	I1207 21:16:47.007474   51113 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:16:47.007536   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:47.020817   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:47.536948   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:48.036982   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:48.537584   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.036899   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.537400   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.575582   51113 api_server.go:72] duration metric: took 2.568102787s to wait for apiserver process to appear ...
	I1207 21:16:49.575614   51113 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:49.575636   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:49.576140   51113 api_server.go:269] stopped: https://192.168.39.254:8444/healthz: Get "https://192.168.39.254:8444/healthz": dial tcp 192.168.39.254:8444: connect: connection refused
	I1207 21:16:49.576174   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:49.576630   51113 api_server.go:269] stopped: https://192.168.39.254:8444/healthz: Get "https://192.168.39.254:8444/healthz": dial tcp 192.168.39.254:8444: connect: connection refused
	I1207 21:16:48.639642   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:48.639702   50270 machine.go:91] provisioned docker machine in 1.10547448s
	I1207 21:16:48.639715   50270 start.go:300] post-start starting for "old-k8s-version-483745" (driver="kvm2")
	I1207 21:16:48.639733   50270 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:48.639772   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.640106   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:48.640136   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.643155   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.643592   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.643625   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.643897   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.644101   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.644253   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.644374   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.756527   50270 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:48.761976   50270 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:48.762042   50270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:48.762117   50270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:48.762229   50270 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:48.762355   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:48.773495   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:48.802433   50270 start.go:303] post-start completed in 162.696963ms
	I1207 21:16:48.802464   50270 fix.go:56] fixHost completed within 20.771337135s
	I1207 21:16:48.802489   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.805389   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.805821   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.805853   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.806002   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.806221   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.806361   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.806516   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.806737   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:48.807177   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:48.807194   50270 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:48.948515   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983808.895290650
	
	I1207 21:16:48.948602   50270 fix.go:206] guest clock: 1701983808.895290650
	I1207 21:16:48.948622   50270 fix.go:219] Guest: 2023-12-07 21:16:48.89529065 +0000 UTC Remote: 2023-12-07 21:16:48.802469186 +0000 UTC m=+365.320601213 (delta=92.821464ms)
	I1207 21:16:48.948679   50270 fix.go:190] guest clock delta is within tolerance: 92.821464ms
	I1207 21:16:48.948694   50270 start.go:83] releasing machines lock for "old-k8s-version-483745", held for 20.917606045s
	I1207 21:16:48.948726   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.948967   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:48.952007   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.952392   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.952424   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.952680   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953302   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953494   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953578   50270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:48.953633   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.953877   50270 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:48.953904   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.957083   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957288   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957631   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.957656   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957798   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.957849   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957874   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.958105   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.958110   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.958284   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.958413   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.958443   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.958665   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.958668   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:49.082678   50270 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:49.091075   50270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:49.250638   50270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:49.259237   50270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:49.259312   50270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:49.279490   50270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:49.279520   50270 start.go:475] detecting cgroup driver to use...
	I1207 21:16:49.279592   50270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:49.301129   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:49.317758   50270 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:49.317832   50270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:49.335384   50270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:49.352808   50270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:49.487177   50270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:49.622551   50270 docker.go:219] disabling docker service ...
	I1207 21:16:49.622632   50270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:49.641913   50270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:49.655046   50270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:49.780471   50270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:49.903816   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:49.917447   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:49.939101   50270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1207 21:16:49.939170   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.949112   50270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:49.949187   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.958706   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.968115   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.977516   50270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:49.987974   50270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:49.996996   50270 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:49.997069   50270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:50.009736   50270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:50.018888   50270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:50.136461   50270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:50.337931   50270 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:50.338013   50270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:50.344175   50270 start.go:543] Will wait 60s for crictl version
	I1207 21:16:50.344237   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:50.348418   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:50.387227   50270 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:50.387329   50270 ssh_runner.go:195] Run: crio --version
	I1207 21:16:50.439820   50270 ssh_runner.go:195] Run: crio --version
	I1207 21:16:50.492743   50270 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1207 21:16:48.431193   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:50.945823   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:47.635909   51037 pod_ready.go:102] pod "coredns-76f75df574-qfwbr" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:49.635091   51037 pod_ready.go:92] pod "coredns-76f75df574-qfwbr" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:49.635119   51037 pod_ready.go:81] duration metric: took 4.022584638s waiting for pod "coredns-76f75df574-qfwbr" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:49.635139   51037 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:51.656178   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:50.494290   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:50.496890   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:50.497226   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:50.497257   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:50.497557   50270 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:50.501988   50270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:50.516192   50270 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1207 21:16:50.516266   50270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:50.564641   50270 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1207 21:16:50.564723   50270 ssh_runner.go:195] Run: which lz4
	I1207 21:16:50.569306   50270 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 21:16:50.573458   50270 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:16:50.573483   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1207 21:16:52.405191   50270 crio.go:444] Took 1.835925 seconds to copy over tarball
	I1207 21:16:52.405260   50270 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:16:50.077304   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:54.602961   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:54.602994   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:54.603007   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:54.660014   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:54.660053   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:55.077712   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:55.102038   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:55.102068   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:55.577664   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:55.586714   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:55.586753   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:56.077361   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:56.084665   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 200:
	ok
	I1207 21:16:56.096164   51113 api_server.go:141] control plane version: v1.28.4
	I1207 21:16:56.096196   51113 api_server.go:131] duration metric: took 6.520574302s to wait for apiserver health ...
	I1207 21:16:56.096209   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:16:56.096219   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:53.431611   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:55.954091   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:53.656773   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:55.659213   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:56.811148   51113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:16:55.499497   50270 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094207903s)
	I1207 21:16:55.499524   50270 crio.go:451] Took 3.094311 seconds to extract the tarball
	I1207 21:16:55.499532   50270 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:16:55.539952   50270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:55.612029   50270 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1207 21:16:55.612059   50270 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:16:55.612164   50270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.612216   50270 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1207 21:16:55.612282   50270 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.612335   50270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.612216   50270 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.612433   50270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.612564   50270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.612575   50270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:55.614472   50270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.614496   50270 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1207 21:16:55.614496   50270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.614507   50270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.614513   50270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:55.614556   50270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.614571   50270 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.614556   50270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.744531   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1207 21:16:55.744539   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.747157   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.748014   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.754498   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.778012   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.781417   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.886272   50270 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1207 21:16:55.886318   50270 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1207 21:16:55.886371   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.949015   50270 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1207 21:16:55.949128   50270 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.949205   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.963217   50270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1207 21:16:55.963332   50270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.963422   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.966733   50270 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1207 21:16:55.966854   50270 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.966934   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.004614   50270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1207 21:16:56.004668   50270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:56.004721   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.015557   50270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1207 21:16:56.015655   50270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:56.015714   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.017603   50270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1207 21:16:56.017643   50270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:56.017686   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.017817   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1207 21:16:56.017913   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:56.018011   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:56.018087   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1207 21:16:56.018160   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:56.028183   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:56.030370   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:56.222552   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1207 21:16:56.222625   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1207 21:16:56.222673   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1207 21:16:56.222680   50270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.222731   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1207 21:16:56.222828   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1207 21:16:56.222911   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1207 21:16:56.236367   50270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1207 21:16:56.236387   50270 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.236440   50270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.236444   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1207 21:16:56.455526   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:58.094353   50270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.638791166s)
	I1207 21:16:58.094525   50270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.858047565s)
	I1207 21:16:58.094552   50270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1207 21:16:58.094591   50270 cache_images.go:92] LoadImages completed in 2.482516651s
	W1207 21:16:58.094650   50270 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I1207 21:16:58.094729   50270 ssh_runner.go:195] Run: crio config
	I1207 21:16:58.191059   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:16:58.191083   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:58.191108   50270 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:58.191132   50270 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.171 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-483745 NodeName:old-k8s-version-483745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1207 21:16:58.191279   50270 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-483745"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-483745
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.171:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:58.191389   50270 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-483745 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-483745 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:16:58.191462   50270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1207 21:16:58.204882   50270 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:58.204948   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:58.217370   50270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1207 21:16:58.237205   50270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:16:58.256539   50270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1207 21:16:58.276428   50270 ssh_runner.go:195] Run: grep 192.168.61.171	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:58.281568   50270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:58.295073   50270 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745 for IP: 192.168.61.171
	I1207 21:16:58.295112   50270 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:58.295295   50270 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:58.295368   50270 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:58.295493   50270 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.key
	I1207 21:16:58.295589   50270 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.key.13a54c20
	I1207 21:16:58.295658   50270 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.key
	I1207 21:16:58.295817   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:58.295861   50270 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:58.295887   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:58.295922   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:58.295972   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:58.296012   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:58.296067   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:58.296936   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:58.327708   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:16:58.354646   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:58.379025   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 21:16:58.404362   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:58.433648   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:58.459739   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:58.487457   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:58.516507   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:57.214999   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:57.244196   51113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:16:57.264778   51113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:57.978177   51113 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:57.978214   51113 system_pods.go:61] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:57.978224   51113 system_pods.go:61] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:57.978232   51113 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:57.978241   51113 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:57.978248   51113 system_pods.go:61] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:57.978254   51113 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:57.978261   51113 system_pods.go:61] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:57.978267   51113 system_pods.go:61] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:57.978276   51113 system_pods.go:74] duration metric: took 713.475246ms to wait for pod list to return data ...
	I1207 21:16:57.978285   51113 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:57.983354   51113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:57.983379   51113 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:57.983389   51113 node_conditions.go:105] duration metric: took 5.099916ms to run NodePressure ...
	I1207 21:16:57.983403   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:58.583287   51113 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:58.590472   51113 kubeadm.go:787] kubelet initialised
	I1207 21:16:58.590500   51113 kubeadm.go:788] duration metric: took 7.176115ms waiting for restarted kubelet to initialise ...
	I1207 21:16:58.590509   51113 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:58.597622   51113 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.609459   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.609491   51113 pod_ready.go:81] duration metric: took 11.841558ms waiting for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.609503   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.609513   51113 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.620143   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.620172   51113 pod_ready.go:81] duration metric: took 10.647465ms waiting for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.620185   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.620193   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.633821   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.633850   51113 pod_ready.go:81] duration metric: took 13.645914ms waiting for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.633864   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.633872   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.647333   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.647359   51113 pod_ready.go:81] duration metric: took 13.477348ms waiting for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.647373   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.647385   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.988420   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-proxy-nmx2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.988448   51113 pod_ready.go:81] duration metric: took 341.054838ms waiting for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.988457   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-proxy-nmx2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.988465   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.388053   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.388080   51113 pod_ready.go:81] duration metric: took 399.605098ms waiting for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:59.388090   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.388097   51113 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.787887   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.787913   51113 pod_ready.go:81] duration metric: took 399.809388ms waiting for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:59.787925   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.787932   51113 pod_ready.go:38] duration metric: took 1.197413161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:59.787945   51113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:16:59.801806   51113 ops.go:34] apiserver oom_adj: -16
	I1207 21:16:59.801828   51113 kubeadm.go:640] restartCluster took 24.28763849s
	I1207 21:16:59.801837   51113 kubeadm.go:406] StartCluster complete in 24.334230687s
	I1207 21:16:59.801855   51113 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:59.801945   51113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:16:59.804179   51113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:59.804458   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:16:59.804515   51113 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:16:59.804612   51113 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.804638   51113 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.804646   51113 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:16:59.804695   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:16:59.804714   51113 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.804727   51113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-275828"
	I1207 21:16:59.804704   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.805119   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805150   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805168   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.805180   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.805204   51113 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.805226   51113 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.805235   51113 addons.go:240] addon metrics-server should already be in state true
	I1207 21:16:59.805277   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.805627   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805663   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.811657   51113 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-275828" context rescaled to 1 replicas
	I1207 21:16:59.811696   51113 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:16:59.814005   51113 out.go:177] * Verifying Kubernetes components...
	I1207 21:16:59.815636   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:16:59.822134   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I1207 21:16:59.822558   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.822636   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34811
	I1207 21:16:59.822718   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I1207 21:16:59.823063   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823104   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823126   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.823128   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.823479   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.823605   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823619   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823636   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823636   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823943   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.823970   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.824050   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.824102   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.824193   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.824463   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.824502   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.828241   51113 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.828264   51113 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:16:59.828292   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.828676   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.830577   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.841996   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I1207 21:16:59.842283   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I1207 21:16:59.842697   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.842888   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.843254   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.843277   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.843391   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.843416   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.843638   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.843779   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.843831   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.843973   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.845644   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.845852   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.847586   51113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:59.847253   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I1207 21:16:59.849062   51113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:16:57.998272   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:00.429603   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:59.850487   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:16:59.850500   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:16:59.850514   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.849121   51113 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:16:59.850564   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:16:59.850583   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.849452   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.851054   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.851071   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.851664   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.852274   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.852315   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.854738   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.855190   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.855204   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.855394   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.855556   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.855649   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.855724   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.856210   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.856582   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.856596   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.856720   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.856846   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.857188   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.857324   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.871856   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I1207 21:16:59.872193   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.872726   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.872744   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.873088   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.873243   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.874542   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.874803   51113 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:16:59.874821   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:16:59.874840   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.877142   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.877524   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.877547   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.877753   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.877889   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.878024   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.878137   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.983279   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:17:00.040397   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:17:00.056981   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:17:00.057008   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:17:00.078195   51113 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1207 21:17:00.078235   51113 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-275828" to be "Ready" ...
	I1207 21:17:00.117369   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:17:00.117399   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:17:00.177756   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:17:00.177783   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:17:00.220667   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:17:01.338599   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.298167461s)
	I1207 21:17:01.338648   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338662   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.338747   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.355434262s)
	I1207 21:17:01.338789   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338802   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.338925   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.338945   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.338960   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338969   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.340360   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340373   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340381   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.340357   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340368   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340472   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.340490   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.340504   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.340785   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340788   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340804   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.347722   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.347741   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.347933   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.347950   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.347968   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.434021   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.213311264s)
	I1207 21:17:01.434084   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.434099   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.434391   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.434413   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.434410   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.434423   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.434434   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.434627   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.434637   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.434648   51113 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-275828"
	I1207 21:17:01.436476   51113 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:16:57.997177   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:59.154238   51037 pod_ready.go:92] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.154261   51037 pod_ready.go:81] duration metric: took 9.519115953s waiting for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.154270   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.159402   51037 pod_ready.go:92] pod "kube-apiserver-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.159421   51037 pod_ready.go:81] duration metric: took 5.143876ms waiting for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.159431   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.164107   51037 pod_ready.go:92] pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.164124   51037 pod_ready.go:81] duration metric: took 4.684573ms waiting for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.164134   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mzv22" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.168711   51037 pod_ready.go:92] pod "kube-proxy-mzv22" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.168727   51037 pod_ready.go:81] duration metric: took 4.587318ms waiting for pod "kube-proxy-mzv22" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.168734   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.201648   51037 pod_ready.go:92] pod "kube-scheduler-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.201676   51037 pod_ready.go:81] duration metric: took 32.935891ms waiting for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.201688   51037 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:01.509707   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:58.544765   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:58.571376   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:58.597700   50270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:58.616720   50270 ssh_runner.go:195] Run: openssl version
	I1207 21:16:58.622830   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:58.634656   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.640469   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.640526   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.646624   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:58.660113   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:58.670742   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.675735   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.675782   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.682821   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:58.696760   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:58.710547   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.716983   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.717048   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.724400   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:58.736496   50270 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:58.742587   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:58.750398   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:58.757537   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:58.764361   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:58.771280   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:58.778697   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:16:58.785873   50270 kubeadm.go:404] StartCluster: {Name:old-k8s-version-483745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-483745 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:58.786022   50270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:58.786079   50270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:58.834174   50270 cri.go:89] found id: ""
	I1207 21:16:58.834262   50270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:58.845932   50270 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:58.845958   50270 kubeadm.go:636] restartCluster start
	I1207 21:16:58.846025   50270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:58.855982   50270 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:58.857458   50270 kubeconfig.go:92] found "old-k8s-version-483745" server: "https://192.168.61.171:8443"
	I1207 21:16:58.860840   50270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:58.870183   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:58.870235   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:58.881631   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:58.881647   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:58.881693   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:58.892422   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:59.393094   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:59.393163   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:59.405578   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:59.893104   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:59.893160   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:59.906998   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:00.393560   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:00.393629   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:00.405837   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:00.893376   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:00.893472   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:00.905785   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.393118   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:01.393204   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:01.405693   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.893214   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:01.893348   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:01.906272   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:02.392588   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:02.392682   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:02.404717   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:02.893325   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:02.893425   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:02.906705   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:03.392549   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:03.392627   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:03.406493   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.437892   51113 addons.go:502] enable addons completed in 1.633389199s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:17:02.198851   51113 node_ready.go:58] node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:17:04.199518   51113 node_ready.go:58] node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:17:02.931262   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:05.431344   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:03.509733   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:05.511779   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:03.892711   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:03.892814   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:03.905553   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:04.393144   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:04.393236   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:04.406280   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:04.893375   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:04.893459   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:04.905715   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.393376   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:05.393473   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:05.405757   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.892719   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:05.892800   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:05.906258   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:06.392706   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:06.392787   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:06.405913   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:06.893392   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:06.893475   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:06.908660   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:07.392944   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:07.393037   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:07.408113   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:07.892488   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:07.892602   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:07.905157   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:08.393126   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:08.393209   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:08.405227   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.197790   51113 node_ready.go:49] node "default-k8s-diff-port-275828" has status "Ready":"True"
	I1207 21:17:05.197814   51113 node_ready.go:38] duration metric: took 5.119553512s waiting for node "default-k8s-diff-port-275828" to be "Ready" ...
	I1207 21:17:05.197825   51113 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:17:05.204644   51113 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:07.225887   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:09.229380   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:07.928733   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:09.929797   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:08.009114   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:10.012079   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:08.870396   50270 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:17:08.870427   50270 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:17:08.870439   50270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:17:08.870496   50270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:17:08.914337   50270 cri.go:89] found id: ""
	I1207 21:17:08.914412   50270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:17:08.932406   50270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:17:08.941877   50270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:17:08.942012   50270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:17:08.952016   50270 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:17:08.952038   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:09.086175   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:09.811331   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.044161   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.117851   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.218309   50270 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:17:10.218376   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:10.231007   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:10.754756   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.255150   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.755138   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.782482   50270 api_server.go:72] duration metric: took 1.564169408s to wait for apiserver process to appear ...
	I1207 21:17:11.782510   50270 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:17:11.782543   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:11.729870   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:12.727588   51113 pod_ready.go:92] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.727621   51113 pod_ready.go:81] duration metric: took 7.52294973s waiting for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.727635   51113 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.733893   51113 pod_ready.go:92] pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.733936   51113 pod_ready.go:81] duration metric: took 6.276731ms waiting for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.733951   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.739431   51113 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.739456   51113 pod_ready.go:81] duration metric: took 5.495838ms waiting for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.739467   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.745435   51113 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.745456   51113 pod_ready.go:81] duration metric: took 5.98053ms waiting for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.745468   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.751301   51113 pod_ready.go:92] pod "kube-proxy-nmx2z" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.751323   51113 pod_ready.go:81] duration metric: took 5.845741ms waiting for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.751333   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:13.122896   51113 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:13.122923   51113 pod_ready.go:81] duration metric: took 371.582675ms waiting for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:13.122936   51113 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:11.931676   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:14.433505   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:12.510180   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:14.511615   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.519216   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.783319   50270 api_server.go:269] stopped: https://192.168.61.171:8443/healthz: Get "https://192.168.61.171:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1207 21:17:16.783432   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:17.468175   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:17:17.468210   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:17:17.968919   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:17.975181   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1207 21:17:17.975206   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1207 21:17:18.469287   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:18.476311   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1207 21:17:18.476340   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1207 21:17:18.968605   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:18.974285   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:17:18.981956   50270 api_server.go:141] control plane version: v1.16.0
	I1207 21:17:18.981983   50270 api_server.go:131] duration metric: took 7.199466057s to wait for apiserver health ...
	I1207 21:17:18.981994   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:17:18.982000   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:17:18.983962   50270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:17:15.433488   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:17.434321   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.931755   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:19.430606   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:19.010615   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:21.512114   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:18.985481   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:17:18.994841   50270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:17:19.015418   50270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:17:19.029654   50270 system_pods.go:59] 7 kube-system pods found
	I1207 21:17:19.029685   50270 system_pods.go:61] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:17:19.029692   50270 system_pods.go:61] "etcd-old-k8s-version-483745" [4a920248-1b35-4834-9e6f-a0e7567b5bb8] Running
	I1207 21:17:19.029699   50270 system_pods.go:61] "kube-apiserver-old-k8s-version-483745" [aaba6fb9-56a1-497d-a398-5c685f5500dd] Running
	I1207 21:17:19.029706   50270 system_pods.go:61] "kube-controller-manager-old-k8s-version-483745" [a13bda00-a0f4-4f59-8b52-65589579efcf] Running
	I1207 21:17:19.029711   50270 system_pods.go:61] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:17:19.029715   50270 system_pods.go:61] "kube-scheduler-old-k8s-version-483745" [4fc3e12a-e294-457e-912f-0ed765ad4def] Running
	I1207 21:17:19.029718   50270 system_pods.go:61] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:17:19.029726   50270 system_pods.go:74] duration metric: took 14.290629ms to wait for pod list to return data ...
	I1207 21:17:19.029739   50270 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:17:19.033868   50270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:17:19.033897   50270 node_conditions.go:123] node cpu capacity is 2
	I1207 21:17:19.033911   50270 node_conditions.go:105] duration metric: took 4.166175ms to run NodePressure ...
	I1207 21:17:19.033945   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:19.284413   50270 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:17:19.288373   50270 retry.go:31] will retry after 182.556746ms: kubelet not initialised
	I1207 21:17:19.479987   50270 retry.go:31] will retry after 253.110045ms: kubelet not initialised
	I1207 21:17:19.744586   50270 retry.go:31] will retry after 608.133785ms: kubelet not initialised
	I1207 21:17:20.357758   50270 retry.go:31] will retry after 829.182382ms: kubelet not initialised
	I1207 21:17:21.192621   50270 retry.go:31] will retry after 998.365497ms: kubelet not initialised
	I1207 21:17:22.196882   50270 retry.go:31] will retry after 1.144379185s: kubelet not initialised
	I1207 21:17:23.346660   50270 retry.go:31] will retry after 4.175853771s: kubelet not initialised
	I1207 21:17:19.937119   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:22.433221   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:21.430858   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:23.929526   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:25.932244   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:24.011486   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:26.509908   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:27.529200   50270 retry.go:31] will retry after 6.099259697s: kubelet not initialised
	I1207 21:17:24.932035   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:26.932432   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:28.935455   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:27.933244   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:30.431008   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:29.009917   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:31.509259   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:31.432441   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.933226   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:32.431713   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:34.931903   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.510686   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:35.511611   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.635018   50270 retry.go:31] will retry after 3.426713545s: kubelet not initialised
	I1207 21:17:37.067021   50270 retry.go:31] will retry after 7.020738309s: kubelet not initialised
	I1207 21:17:35.933872   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:38.432200   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:37.432208   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:39.432443   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:38.008964   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:40.013143   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:40.434554   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:42.935808   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:41.931614   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:44.431445   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:42.510798   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:45.010221   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:44.093245   50270 retry.go:31] will retry after 15.092242293s: kubelet not initialised
	I1207 21:17:45.433353   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:47.933249   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:46.931078   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:49.430564   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:47.510355   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:50.010022   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:49.935001   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:52.433167   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:51.430664   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:53.431310   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:55.431508   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:52.509729   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:55.010127   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:54.937299   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.432126   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.929516   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:59.929800   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.511723   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:00.010732   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:59.190582   50270 retry.go:31] will retry after 18.708242221s: kubelet not initialised
	I1207 21:17:59.932898   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.435773   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.429487   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.931336   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.011470   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.508873   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:06.510378   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.932311   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:07.434111   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:07.431033   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.931058   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.009614   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:11.009942   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.932527   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:11.933100   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:14.432890   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:12.429420   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:14.431778   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:13.010085   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:15.509812   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:17.907480   50270 kubeadm.go:787] kubelet initialised
	I1207 21:18:17.907516   50270 kubeadm.go:788] duration metric: took 58.6230723s waiting for restarted kubelet to initialise ...
	I1207 21:18:17.907523   50270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:18:17.912349   50270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.917692   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.917710   50270 pod_ready.go:81] duration metric: took 5.339125ms waiting for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.917718   50270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.923173   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.923192   50270 pod_ready.go:81] duration metric: took 5.469466ms waiting for pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.923200   50270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.928824   50270 pod_ready.go:92] pod "etcd-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.928846   50270 pod_ready.go:81] duration metric: took 5.638159ms waiting for pod "etcd-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.928856   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.934993   50270 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.935014   50270 pod_ready.go:81] duration metric: took 6.149728ms waiting for pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.935025   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.311907   50270 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:18.311934   50270 pod_ready.go:81] duration metric: took 376.900024ms waiting for pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.311947   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:16.931768   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.932732   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:16.930954   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.932194   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.009341   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:20.010383   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.709795   50270 pod_ready.go:92] pod "kube-proxy-wrl9t" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:18.709818   50270 pod_ready.go:81] duration metric: took 397.865434ms waiting for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.709828   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:19.107018   50270 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:19.107046   50270 pod_ready.go:81] duration metric: took 397.21085ms waiting for pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:19.107074   50270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:21.413113   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.414993   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:20.937780   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.432192   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:21.429764   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.430826   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.930929   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:22.510894   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.009872   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.914333   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.914486   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.432249   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.432529   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.930973   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.430718   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.510016   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.009983   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.415400   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.912237   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:29.932694   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.433150   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.432680   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.931118   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.010572   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.508896   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:36.509628   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.913374   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:36.914250   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.933409   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:37.432655   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.432740   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:37.430165   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.930630   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.009629   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:41.009658   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:38.914325   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:40.915158   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:43.413980   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:41.932574   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:44.432525   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:42.431330   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:44.929635   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:43.009978   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:45.010954   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:45.414082   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.415225   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:46.932342   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:48.932460   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.429890   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.931948   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.508820   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.508885   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:51.510909   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.916969   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:52.414590   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:51.431888   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:53.432497   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:52.429836   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.429987   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.010442   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.520121   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.415187   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.914505   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:55.433372   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:57.437496   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.932937   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.430774   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.010885   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.510473   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.413820   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.413911   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.414163   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.932159   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.932344   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:04.432873   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.430926   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.930199   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.930253   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.511496   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.512541   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.913832   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:07.915554   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:06.433629   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:08.933148   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:07.931760   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.431655   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:08.009852   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.010279   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.415114   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.913846   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:11.433166   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:13.933572   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.930147   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:14.935480   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.010617   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:14.510815   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:15.414959   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.913372   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:16.433375   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:18.932915   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.436017   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.933613   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.008855   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.010583   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.510650   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.913760   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.913931   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.434113   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:23.932185   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:22.429942   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:24.432486   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:24.009731   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.513595   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:23.913964   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:25.915033   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:28.415173   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.433721   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:28.932763   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.934197   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:29.432795   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:29.008998   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.011163   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:30.912991   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:32.914672   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.432802   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.932626   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.930505   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.931069   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.510138   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:36.010166   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:34.915019   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:37.414169   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:35.933595   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.432419   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:36.433061   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.929697   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.930753   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.509265   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.509898   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:39.414719   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:41.914208   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.932356   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:42.932643   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:43.430519   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:45.930095   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:42.510763   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:44.511006   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:43.914874   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:46.414739   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:45.431904   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.932732   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.930507   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:49.930634   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.009537   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:49.009825   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.010633   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:48.914101   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.413288   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:50.433022   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:52.932549   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.930920   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:54.433488   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:53.508693   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.509440   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:53.913446   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.914532   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.416064   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.432116   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:57.935271   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:56.929900   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.931501   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.009318   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.510190   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.915025   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.414806   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.432326   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:02.432758   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:04.434643   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:01.431826   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.931069   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.931648   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.010188   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.010498   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.914269   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:07.914640   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:06.931909   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:08.932549   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:08.431136   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.932438   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:07.509186   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:09.511791   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.415605   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:12.918130   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.934599   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:13.434477   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:13.430502   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.434943   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:12.008903   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:14.010390   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:16.509062   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.415237   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.914465   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.435338   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.933559   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.931293   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:18.408309   50624 pod_ready.go:81] duration metric: took 4m0.000858815s waiting for pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:18.408355   50624 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:20:18.408376   50624 pod_ready.go:38] duration metric: took 4m11.111070516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:18.408405   50624 kubeadm.go:640] restartCluster took 4m30.625453328s
	W1207 21:20:18.408479   50624 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:20:18.408513   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:20:18.510036   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:20.510485   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:19.915160   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:21.915544   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:19.940064   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:22.432481   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:24.432791   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:23.010158   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:25.509777   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:23.915685   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:26.414017   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.415525   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:26.435601   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.932153   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.009824   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:30.509369   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:32.372266   50624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.96372485s)
	I1207 21:20:32.372349   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:20:32.386002   50624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:20:32.395757   50624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:20:32.406709   50624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:20:32.406761   50624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:20:32.465707   50624 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 21:20:32.465842   50624 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:20:32.636031   50624 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:20:32.636171   50624 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:20:32.636296   50624 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:20:32.892368   50624 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:20:32.894341   50624 out.go:204]   - Generating certificates and keys ...
	I1207 21:20:32.894484   50624 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:20:32.894581   50624 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:20:32.894717   50624 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:20:32.894799   50624 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:20:32.895289   50624 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:20:32.895583   50624 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:20:32.896112   50624 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:20:32.896577   50624 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:20:32.897032   50624 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:20:32.897567   50624 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:20:32.897804   50624 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:20:32.897886   50624 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:20:32.942322   50624 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:20:33.084899   50624 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:20:33.286309   50624 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:20:33.482188   50624 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:20:33.483077   50624 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:20:33.487928   50624 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:20:30.912937   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:32.914703   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:30.934926   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:33.431849   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:33.489853   50624 out.go:204]   - Booting up control plane ...
	I1207 21:20:33.490021   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:20:33.490177   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:20:33.490458   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:20:33.509319   50624 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:20:33.509448   50624 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:20:33.509501   50624 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:20:33.654452   50624 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:20:32.509729   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:34.510930   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:34.918486   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.414467   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:35.432767   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.931132   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.009506   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:39.011200   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.509897   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.657033   50624 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003082 seconds
	I1207 21:20:41.657193   50624 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:20:41.673142   50624 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:20:42.218438   50624 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:20:42.218706   50624 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-598346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:20:42.745090   50624 kubeadm.go:322] [bootstrap-token] Using token: 74zooz.4uhmxlwojs4pjw69
	I1207 21:20:42.746934   50624 out.go:204]   - Configuring RBAC rules ...
	I1207 21:20:42.747111   50624 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:20:42.762521   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:20:42.776210   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:20:42.781152   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:20:42.786698   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:20:42.795815   50624 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:20:42.811407   50624 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:20:43.073430   50624 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:20:43.167611   50624 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:20:43.168880   50624 kubeadm.go:322] 
	I1207 21:20:43.168970   50624 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:20:43.169014   50624 kubeadm.go:322] 
	I1207 21:20:43.169111   50624 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:20:43.169132   50624 kubeadm.go:322] 
	I1207 21:20:43.169163   50624 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:20:43.169239   50624 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:20:43.169314   50624 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:20:43.169322   50624 kubeadm.go:322] 
	I1207 21:20:43.169394   50624 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:20:43.169402   50624 kubeadm.go:322] 
	I1207 21:20:43.169475   50624 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:20:43.169500   50624 kubeadm.go:322] 
	I1207 21:20:43.169591   50624 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:20:43.169701   50624 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:20:43.169799   50624 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:20:43.169811   50624 kubeadm.go:322] 
	I1207 21:20:43.169930   50624 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:20:43.170066   50624 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:20:43.170078   50624 kubeadm.go:322] 
	I1207 21:20:43.170177   50624 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 74zooz.4uhmxlwojs4pjw69 \
	I1207 21:20:43.170303   50624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:20:43.170332   50624 kubeadm.go:322] 	--control-plane 
	I1207 21:20:43.170338   50624 kubeadm.go:322] 
	I1207 21:20:43.170463   50624 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:20:43.170474   50624 kubeadm.go:322] 
	I1207 21:20:43.170590   50624 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 74zooz.4uhmxlwojs4pjw69 \
	I1207 21:20:43.170717   50624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:20:43.171438   50624 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:20:43.171461   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:20:43.171467   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:20:43.173556   50624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:20:39.415520   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.416257   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:39.933233   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.933860   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:44.432482   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:43.175267   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:20:43.199404   50624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
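For context, the 457-byte file copied above is the bridge CNI config minikube drops into /etc/cni/net.d/; below is a minimal sketch of the kind of conflist involved, written as a shell step for illustration only (the plugin fields and the 10.244.0.0/16 subnet are assumptions, not values taken from this run):

	# Illustrative sketch only (assumed field values): a bridge + portmap CNI
	# conflist of the sort written to /etc/cni/net.d/1-k8s.conflist above.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF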
	I1207 21:20:43.237091   50624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:20:43.237150   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.237203   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=embed-certs-598346 minikube.k8s.io/updated_at=2023_12_07T21_20_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.303369   50624 ops.go:34] apiserver oom_adj: -16
	I1207 21:20:43.670500   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.788364   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:44.394973   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:44.894494   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:45.394695   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:45.895141   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.509949   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:45.511007   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:43.915384   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:45.916082   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:47.916757   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:46.432649   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:48.434738   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:46.394706   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:46.894743   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.395117   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.894780   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:48.395408   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:48.895349   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:49.394860   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:49.894472   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:50.395102   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:50.895157   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.512284   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.011848   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.413787   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:52.913793   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.933240   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:52.935428   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:51.394691   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:51.895193   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:52.395131   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:52.894787   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:53.394652   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:53.895139   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:54.395160   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:54.895153   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:55.394410   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:55.584599   50624 kubeadm.go:1088] duration metric: took 12.347498848s to wait for elevateKubeSystemPrivileges.
	I1207 21:20:55.584628   50624 kubeadm.go:406] StartCluster complete in 5m7.857234007s
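
The 12.35s elevateKubeSystemPrivileges wait reported above corresponds to the repeated "kubectl get sa default" runs: they poll until the default service account has been created by the controller manager, after the minikube-rbac clusterrolebinding for kube-system:default was applied at 21:20:43. The equivalent manual sequence, using the same binary and kubeconfig paths shown in the log:

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
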
	I1207 21:20:55.584645   50624 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:20:55.584733   50624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:20:55.587311   50624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:20:55.587607   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:20:55.587630   50624 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:20:55.587708   50624 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-598346"
	I1207 21:20:55.587716   50624 addons.go:69] Setting default-storageclass=true in profile "embed-certs-598346"
	I1207 21:20:55.587728   50624 addons.go:69] Setting metrics-server=true in profile "embed-certs-598346"
	I1207 21:20:55.587739   50624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-598346"
	I1207 21:20:55.587760   50624 addons.go:231] Setting addon metrics-server=true in "embed-certs-598346"
	W1207 21:20:55.587769   50624 addons.go:240] addon metrics-server should already be in state true
	I1207 21:20:55.587826   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.587736   50624 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-598346"
	W1207 21:20:55.587852   50624 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:20:55.587901   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.587824   50624 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:20:55.588192   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588202   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588223   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.588224   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.588284   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588308   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.605717   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I1207 21:20:55.605750   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I1207 21:20:55.605726   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1207 21:20:55.606254   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606305   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606338   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606778   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606803   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.606823   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606844   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.606826   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606904   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.607178   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607218   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607274   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607420   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.607776   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.607816   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.607818   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.607849   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.610610   50624 addons.go:231] Setting addon default-storageclass=true in "embed-certs-598346"
	W1207 21:20:55.610628   50624 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:20:55.610647   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.610902   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.610927   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.624530   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I1207 21:20:55.624997   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.625474   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.625492   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.625833   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.626016   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.626236   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I1207 21:20:55.626715   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.627093   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I1207 21:20:55.627538   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.627700   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.627709   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.628044   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.628061   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.628109   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.628112   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.629910   50624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:20:55.628721   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.628756   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.631270   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.631338   50624 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:20:55.631357   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:20:55.631371   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.631724   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.634618   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.636632   50624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:20:55.635162   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.635740   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.638311   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:20:55.638331   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:20:55.638354   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.638318   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.638427   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.638930   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.639110   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.639264   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.642987   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.643401   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.643432   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.643605   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.643794   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.643947   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.644065   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.649214   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37993
	I1207 21:20:55.649604   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.650085   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.650106   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.650583   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.650740   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.657356   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.657691   50624 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:20:55.657708   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:20:55.657727   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.659345   50624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-598346" context rescaled to 1 replicas
	I1207 21:20:55.659381   50624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:20:55.660949   50624 out.go:177] * Verifying Kubernetes components...
	I1207 21:20:55.662172   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:20:55.661748   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.662288   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.662323   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.662617   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.662821   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.662992   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.663175   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.825166   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:20:55.850131   50624 node_ready.go:35] waiting up to 6m0s for node "embed-certs-598346" to be "Ready" ...
	I1207 21:20:55.850203   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:20:55.850365   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:20:55.850378   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:20:55.879031   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:20:55.896010   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:20:55.896034   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:20:55.910575   50624 node_ready.go:49] node "embed-certs-598346" has status "Ready":"True"
	I1207 21:20:55.910603   50624 node_ready.go:38] duration metric: took 60.438039ms waiting for node "embed-certs-598346" to be "Ready" ...
	I1207 21:20:55.910615   50624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:55.976847   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:20:55.976874   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:20:55.981345   50624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:56.068591   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:20:52.509374   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:55.012033   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:54.915300   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.414020   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.761169   50624 pod_ready.go:97] error getting pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7cvcf" not found
	I1207 21:20:57.761195   50624 pod_ready.go:81] duration metric: took 1.779826027s waiting for pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:57.761205   50624 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7cvcf" not found
	I1207 21:20:57.761212   50624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.813172   50624 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.962919124s)
	I1207 21:20:58.813238   50624 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
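
The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.72.1 in this run). Based on the sed expressions in the command, the Corefile afterwards should contain a hosts block of roughly this shape (reconstructed from the command, not dumped from the cluster):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml
    # expected to show, inside the Corefile:
    #     hosts {
    #        192.168.72.1 host.minikube.internal
    #        fallthrough
    #     }
    # plus a "log" directive inserted above the existing "errors" line
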
	I1207 21:20:58.813195   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.934130104s)
	I1207 21:20:58.813281   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813299   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813520   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.988311627s)
	I1207 21:20:58.813560   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813572   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813757   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.813776   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.813787   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813796   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813831   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.814066   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.814066   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814093   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.814097   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814110   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.814132   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.814152   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.814511   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814531   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.839304   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.839329   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.839611   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.839653   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.839663   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.859922   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.791233211s)
	I1207 21:20:58.859979   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.859998   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.860412   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.860469   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.860483   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.860495   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.860430   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.860749   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.860768   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.860778   50624 addons.go:467] Verifying addon metrics-server=true in "embed-certs-598346"
	I1207 21:20:58.863874   50624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:20:55.431955   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.434174   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:58.865423   50624 addons.go:502] enable addons completed in 3.277791662s: enabled=[storage-provisioner default-storageclass metrics-server]
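
At this point the three addons are applied but metrics-server itself is still coming up (its pod is listed as Pending in the system_pods dump a few lines below). Illustrative follow-up checks, assuming the kubectl context carries the profile name as the later "Done!" line indicates:

    minikube -p embed-certs-598346 addons list
    kubectl --context embed-certs-598346 -n kube-system get deploy metrics-server
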
	I1207 21:20:58.894841   50624 pod_ready.go:92] pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.894877   50624 pod_ready.go:81] duration metric: took 1.133651819s waiting for pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.894891   50624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.906981   50624 pod_ready.go:92] pod "etcd-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.907009   50624 pod_ready.go:81] duration metric: took 12.109561ms waiting for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.907020   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.918176   50624 pod_ready.go:92] pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.918198   50624 pod_ready.go:81] duration metric: took 11.169952ms waiting for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.918211   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.928763   50624 pod_ready.go:92] pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.928791   50624 pod_ready.go:81] duration metric: took 10.570922ms waiting for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.928804   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h4pmv" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.163618   50624 pod_ready.go:92] pod "kube-proxy-h4pmv" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:00.163652   50624 pod_ready.go:81] duration metric: took 1.234839709s waiting for pod "kube-proxy-h4pmv" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.163664   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.455887   50624 pod_ready.go:92] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:00.455909   50624 pod_ready.go:81] duration metric: took 292.236645ms waiting for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.455917   50624 pod_ready.go:38] duration metric: took 4.545291617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:00.455932   50624 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:00.455974   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:00.474126   50624 api_server.go:72] duration metric: took 4.814712718s to wait for apiserver process to appear ...
	I1207 21:21:00.474151   50624 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:00.474170   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:21:00.480909   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
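
The healthz probe above talks to the apiserver directly on the VM address from the DHCP lease (192.168.72.180:8443). The same check can be made by hand; -k is needed because the apiserver certificate is not in the caller's trust store (illustrative, not from this run):

    curl -k https://192.168.72.180:8443/healthz    # prints "ok" when healthy
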
	I1207 21:21:00.482468   50624 api_server.go:141] control plane version: v1.28.4
	I1207 21:21:00.482491   50624 api_server.go:131] duration metric: took 8.332499ms to wait for apiserver health ...
	I1207 21:21:00.482500   50624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:00.658932   50624 system_pods.go:59] 8 kube-system pods found
	I1207 21:21:00.658965   50624 system_pods.go:61] "coredns-5dd5756b68-nllk7" [89c53a27-fa3e-40e9-b180-1bb6ae5c7b62] Running
	I1207 21:21:00.658973   50624 system_pods.go:61] "etcd-embed-certs-598346" [a837c9ba-7a9d-4c61-9474-160ff283b42e] Running
	I1207 21:21:00.658980   50624 system_pods.go:61] "kube-apiserver-embed-certs-598346" [d65bb254-2c09-49c3-98a8-651f580e5f3d] Running
	I1207 21:21:00.658986   50624 system_pods.go:61] "kube-controller-manager-embed-certs-598346" [307a7c5c-0579-4c3c-a84f-e99d61dd8722] Running
	I1207 21:21:00.658992   50624 system_pods.go:61] "kube-proxy-h4pmv" [2d3cc315-efaf-47b9-86e3-851cc930461b] Running
	I1207 21:21:00.658999   50624 system_pods.go:61] "kube-scheduler-embed-certs-598346" [43983338-9029-4240-9b20-b23f64f6880c] Running
	I1207 21:21:00.659010   50624 system_pods.go:61] "metrics-server-57f55c9bc5-pstg2" [463b12c8-de62-4ff8-a5c4-55eeb721eea8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:00.659018   50624 system_pods.go:61] "storage-provisioner" [838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14] Running
	I1207 21:21:00.659036   50624 system_pods.go:74] duration metric: took 176.530206ms to wait for pod list to return data ...
	I1207 21:21:00.659049   50624 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:00.853965   50624 default_sa.go:45] found service account: "default"
	I1207 21:21:00.853997   50624 default_sa.go:55] duration metric: took 194.939162ms for default service account to be created ...
	I1207 21:21:00.854008   50624 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:01.058565   50624 system_pods.go:86] 8 kube-system pods found
	I1207 21:21:01.058594   50624 system_pods.go:89] "coredns-5dd5756b68-nllk7" [89c53a27-fa3e-40e9-b180-1bb6ae5c7b62] Running
	I1207 21:21:01.058600   50624 system_pods.go:89] "etcd-embed-certs-598346" [a837c9ba-7a9d-4c61-9474-160ff283b42e] Running
	I1207 21:21:01.058604   50624 system_pods.go:89] "kube-apiserver-embed-certs-598346" [d65bb254-2c09-49c3-98a8-651f580e5f3d] Running
	I1207 21:21:01.058609   50624 system_pods.go:89] "kube-controller-manager-embed-certs-598346" [307a7c5c-0579-4c3c-a84f-e99d61dd8722] Running
	I1207 21:21:01.058613   50624 system_pods.go:89] "kube-proxy-h4pmv" [2d3cc315-efaf-47b9-86e3-851cc930461b] Running
	I1207 21:21:01.058617   50624 system_pods.go:89] "kube-scheduler-embed-certs-598346" [43983338-9029-4240-9b20-b23f64f6880c] Running
	I1207 21:21:01.058634   50624 system_pods.go:89] "metrics-server-57f55c9bc5-pstg2" [463b12c8-de62-4ff8-a5c4-55eeb721eea8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:01.058640   50624 system_pods.go:89] "storage-provisioner" [838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14] Running
	I1207 21:21:01.058651   50624 system_pods.go:126] duration metric: took 204.636417ms to wait for k8s-apps to be running ...
	I1207 21:21:01.058664   50624 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:01.058707   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:01.081694   50624 system_svc.go:56] duration metric: took 23.018184ms WaitForService to wait for kubelet.
	I1207 21:21:01.081719   50624 kubeadm.go:581] duration metric: took 5.422310896s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:01.081736   50624 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:01.254804   50624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:01.254838   50624 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:01.254851   50624 node_conditions.go:105] duration metric: took 173.110501ms to run NodePressure ...
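
The NodePressure check above reads back the node's reported capacity (2 CPUs and 17784752Ki ephemeral storage in this run) along with its pressure conditions. An equivalent manual look, again assuming the profile-named context:

    kubectl --context embed-certs-598346 describe node embed-certs-598346
    # Capacity should list cpu: 2 and ephemeral-storage: 17784752Ki;
    # on a healthy node the MemoryPressure/DiskPressure/PIDPressure conditions read False
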
	I1207 21:21:01.254866   50624 start.go:228] waiting for startup goroutines ...
	I1207 21:21:01.254875   50624 start.go:233] waiting for cluster config update ...
	I1207 21:21:01.254888   50624 start.go:242] writing updated cluster config ...
	I1207 21:21:01.255260   50624 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:01.312696   50624 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:21:01.314740   50624 out.go:177] * Done! kubectl is now configured to use "embed-certs-598346" cluster and "default" namespace by default
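
The "Done!" line closes the embed-certs-598346 start path with zero client/server minor skew (kubectl 1.28.4 against cluster 1.28.4, per the line above). A minimal smoke check from the same workstation, sketched:

    kubectl config current-context     # expected to be embed-certs-598346
    kubectl version                    # client and server versions; 1.28.4 / 1.28.4 in this run
    kubectl get nodes
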
	I1207 21:20:57.510167   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:59.202324   51037 pod_ready.go:81] duration metric: took 4m0.000618876s waiting for pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:59.202361   51037 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:20:59.202386   51037 pod_ready.go:38] duration metric: took 4m13.59894194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:59.202417   51037 kubeadm.go:640] restartCluster took 4m34.848470509s
	W1207 21:20:59.202490   51037 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:20:59.202525   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:20:59.416072   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:01.416132   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:59.932924   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:01.933678   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:04.432068   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:03.914100   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:06.414149   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:06.432277   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:08.432456   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:08.914660   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:10.927167   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.414941   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.233635   51037 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.031083103s)
	I1207 21:21:13.233717   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:13.246941   51037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:21:13.256697   51037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:21:13.265143   51037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
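
Exit status 2 here is the expected outcome right after "kubeadm reset": GNU ls exits 2 when a path given on the command line does not exist, and all four kubeconfigs were removed by the reset, so the stale-config cleanup is skipped and a fresh "kubeadm init" is started on the next line. The check itself is just:

    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
                /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    echo $?    # 2 in this run, since none of the files exist
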
	I1207 21:21:13.265188   51037 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:21:13.323766   51037 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1207 21:21:13.323875   51037 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:21:13.477749   51037 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:21:13.477938   51037 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:21:13.478083   51037 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:21:13.750607   51037 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:21:13.752541   51037 out.go:204]   - Generating certificates and keys ...
	I1207 21:21:13.752655   51037 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:21:13.752735   51037 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:21:13.752887   51037 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:21:13.753031   51037 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:21:13.753250   51037 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:21:13.753432   51037 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:21:13.753647   51037 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:21:13.753850   51037 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:21:13.754167   51037 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:21:13.755114   51037 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:21:13.755889   51037 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:21:13.756020   51037 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:21:13.859938   51037 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:21:14.193613   51037 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 21:21:14.239766   51037 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:21:14.448306   51037 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:21:14.537558   51037 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:21:14.538242   51037 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:21:14.542910   51037 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:21:10.432632   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:12.932769   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.123869   51113 pod_ready.go:81] duration metric: took 4m0.000917841s waiting for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	E1207 21:21:13.123898   51113 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:21:13.123907   51113 pod_ready.go:38] duration metric: took 4m7.926070649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:13.123923   51113 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:13.123951   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:13.124010   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:13.197887   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:13.197918   51113 cri.go:89] found id: ""
	I1207 21:21:13.197947   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:13.198016   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.203887   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:13.203953   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:13.250727   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:13.250754   51113 cri.go:89] found id: ""
	I1207 21:21:13.250766   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:13.250823   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.255837   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:13.255881   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:13.297690   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:13.297719   51113 cri.go:89] found id: ""
	I1207 21:21:13.297729   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:13.297786   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.303238   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:13.303301   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:13.349838   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:13.349879   51113 cri.go:89] found id: ""
	I1207 21:21:13.349890   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:13.349960   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.354368   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:13.354423   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:13.394201   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:13.394230   51113 cri.go:89] found id: ""
	I1207 21:21:13.394240   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:13.394298   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.398418   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:13.398489   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:13.443027   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:13.443055   51113 cri.go:89] found id: ""
	I1207 21:21:13.443065   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:13.443129   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.447530   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:13.447601   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:13.491670   51113 cri.go:89] found id: ""
	I1207 21:21:13.491712   51113 logs.go:284] 0 containers: []
	W1207 21:21:13.491720   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:13.491735   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:13.491795   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:13.541386   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:13.541414   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:13.541421   51113 cri.go:89] found id: ""
	I1207 21:21:13.541430   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:13.541491   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.546270   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.551524   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:13.551549   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:13.630073   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:13.630119   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:13.680287   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:13.680318   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:13.733406   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:13.733442   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:13.751810   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:13.751845   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:13.905859   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:13.905889   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:13.950595   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:13.950626   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:13.993833   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:13.993862   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:14.488205   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:14.488242   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:14.531169   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:14.531201   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:14.588229   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:14.588268   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:14.642280   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:14.642310   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:14.693027   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:14.693062   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
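
Each "Gathering logs for ..." step above resolves a component to its CRI container id and tails its logs; the same data can be pulled by hand with the commands the runner uses, e.g. for kube-apiserver plus the two host services:

    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    sudo /usr/bin/crictl logs --tail 400 "$ID"
    sudo journalctl -u kubelet -n 400     # kubelet logs
    sudo journalctl -u crio -n 400        # CRI-O logs
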
	I1207 21:21:14.544787   51037 out.go:204]   - Booting up control plane ...
	I1207 21:21:14.544925   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:21:14.545032   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:21:14.545988   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:21:14.565092   51037 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:21:14.566289   51037 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:21:14.566356   51037 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:21:14.723698   51037 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:21:15.913198   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:17.914942   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:17.234321   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:17.253156   51113 api_server.go:72] duration metric: took 4m17.441427611s to wait for apiserver process to appear ...
	I1207 21:21:17.253187   51113 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:17.253223   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:17.253330   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:17.301526   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:17.301557   51113 cri.go:89] found id: ""
	I1207 21:21:17.301573   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:17.301631   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.306049   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:17.306124   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:17.359167   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:17.359195   51113 cri.go:89] found id: ""
	I1207 21:21:17.359205   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:17.359264   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.363853   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:17.363919   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:17.403245   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:17.403271   51113 cri.go:89] found id: ""
	I1207 21:21:17.403281   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:17.403345   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.407694   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:17.407771   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:17.462260   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:17.462287   51113 cri.go:89] found id: ""
	I1207 21:21:17.462298   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:17.462355   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.467157   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:17.467214   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:17.502206   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:17.502236   51113 cri.go:89] found id: ""
	I1207 21:21:17.502246   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:17.502301   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.507601   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:17.507672   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:17.550248   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:17.550275   51113 cri.go:89] found id: ""
	I1207 21:21:17.550284   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:17.550345   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.554817   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:17.554879   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:17.595234   51113 cri.go:89] found id: ""
	I1207 21:21:17.595262   51113 logs.go:284] 0 containers: []
	W1207 21:21:17.595272   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:17.595280   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:17.595331   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:17.657464   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:17.657491   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:17.657501   51113 cri.go:89] found id: ""
	I1207 21:21:17.657511   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:17.657566   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.662364   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.667878   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:17.667901   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:17.716160   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:17.716187   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:17.770503   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:17.770548   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:17.836877   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:17.836933   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:17.881499   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:17.881536   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:17.930792   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:17.930837   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:17.945486   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:17.945519   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:18.087782   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:18.087825   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:18.149272   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:18.149312   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:18.196792   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:18.196829   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:18.243539   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:18.243575   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:18.305424   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:18.305465   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:18.772176   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:18.772213   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:19.916426   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:22.414318   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:22.728616   51037 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002882 seconds
	I1207 21:21:22.745711   51037 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:21:22.772747   51037 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:21:23.310807   51037 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:21:23.311004   51037 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-950431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:21:23.826933   51037 kubeadm.go:322] [bootstrap-token] Using token: ft70hz.nx8ps5rcldht4kzk
	I1207 21:21:23.828530   51037 out.go:204]   - Configuring RBAC rules ...
	I1207 21:21:23.828676   51037 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:21:23.836739   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:21:23.845207   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:21:23.852566   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:21:23.856912   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:21:23.863418   51037 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:21:23.881183   51037 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:21:24.185664   51037 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:21:24.246564   51037 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:21:24.246626   51037 kubeadm.go:322] 
	I1207 21:21:24.246741   51037 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:21:24.246761   51037 kubeadm.go:322] 
	I1207 21:21:24.246858   51037 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:21:24.246868   51037 kubeadm.go:322] 
	I1207 21:21:24.246898   51037 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:21:24.246967   51037 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:21:24.247047   51037 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:21:24.247063   51037 kubeadm.go:322] 
	I1207 21:21:24.247122   51037 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:21:24.247132   51037 kubeadm.go:322] 
	I1207 21:21:24.247183   51037 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:21:24.247193   51037 kubeadm.go:322] 
	I1207 21:21:24.247259   51037 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:21:24.247361   51037 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:21:24.247450   51037 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:21:24.247461   51037 kubeadm.go:322] 
	I1207 21:21:24.247565   51037 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:21:24.247669   51037 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:21:24.247678   51037 kubeadm.go:322] 
	I1207 21:21:24.247777   51037 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft70hz.nx8ps5rcldht4kzk \
	I1207 21:21:24.247910   51037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:21:24.247941   51037 kubeadm.go:322] 	--control-plane 
	I1207 21:21:24.247951   51037 kubeadm.go:322] 
	I1207 21:21:24.248049   51037 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:21:24.248059   51037 kubeadm.go:322] 
	I1207 21:21:24.248150   51037 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft70hz.nx8ps5rcldht4kzk \
	I1207 21:21:24.248271   51037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:21:24.249001   51037 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:21:24.249031   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:21:24.249041   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:21:24.250938   51037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:21:21.338084   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:21:21.343250   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 200:
	ok
	I1207 21:21:21.344871   51113 api_server.go:141] control plane version: v1.28.4
	I1207 21:21:21.344892   51113 api_server.go:131] duration metric: took 4.091697961s to wait for apiserver health ...
	I1207 21:21:21.344901   51113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:21.344930   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:21.344990   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:21.385908   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:21.385944   51113 cri.go:89] found id: ""
	I1207 21:21:21.385954   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:21.386011   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.390584   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:21.390655   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:21.435206   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:21.435226   51113 cri.go:89] found id: ""
	I1207 21:21:21.435236   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:21.435294   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.441020   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:21.441091   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:21.480294   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:21.480319   51113 cri.go:89] found id: ""
	I1207 21:21:21.480329   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:21.480384   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.484454   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:21.484511   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:21.531792   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:21.531817   51113 cri.go:89] found id: ""
	I1207 21:21:21.531826   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:21.531884   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.536194   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:21.536265   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:21.579784   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:21.579803   51113 cri.go:89] found id: ""
	I1207 21:21:21.579810   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:21.579852   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.583895   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:21.583961   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:21.623350   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:21.623383   51113 cri.go:89] found id: ""
	I1207 21:21:21.623393   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:21.623450   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.628173   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:21.628226   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:21.670522   51113 cri.go:89] found id: ""
	I1207 21:21:21.670549   51113 logs.go:284] 0 containers: []
	W1207 21:21:21.670559   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:21.670565   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:21.670622   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:21.717892   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:21.717918   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:21.717939   51113 cri.go:89] found id: ""
	I1207 21:21:21.717958   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:21.718024   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.724161   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.728796   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:21.728817   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:21.743574   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:21.743599   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:22.158202   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:22.158247   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:22.224569   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:22.224610   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:22.376503   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:22.376539   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:22.421207   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:22.421236   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:22.468100   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:22.468130   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:22.514216   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:22.514246   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:22.563190   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:22.563217   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:22.622636   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:22.622673   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:22.673280   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:22.673309   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:22.724767   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:22.724799   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:22.787505   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:22.787539   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:25.337268   51113 system_pods.go:59] 8 kube-system pods found
	I1207 21:21:25.337297   51113 system_pods.go:61] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running
	I1207 21:21:25.337304   51113 system_pods.go:61] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running
	I1207 21:21:25.337312   51113 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running
	I1207 21:21:25.337319   51113 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running
	I1207 21:21:25.337325   51113 system_pods.go:61] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running
	I1207 21:21:25.337331   51113 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running
	I1207 21:21:25.337338   51113 system_pods.go:61] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:25.337347   51113 system_pods.go:61] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running
	I1207 21:21:25.337354   51113 system_pods.go:74] duration metric: took 3.99244703s to wait for pod list to return data ...
	I1207 21:21:25.337363   51113 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:25.340607   51113 default_sa.go:45] found service account: "default"
	I1207 21:21:25.340630   51113 default_sa.go:55] duration metric: took 3.261042ms for default service account to be created ...
	I1207 21:21:25.340637   51113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:25.351616   51113 system_pods.go:86] 8 kube-system pods found
	I1207 21:21:25.351640   51113 system_pods.go:89] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running
	I1207 21:21:25.351646   51113 system_pods.go:89] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running
	I1207 21:21:25.351651   51113 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running
	I1207 21:21:25.351656   51113 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running
	I1207 21:21:25.351659   51113 system_pods.go:89] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running
	I1207 21:21:25.351663   51113 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running
	I1207 21:21:25.351670   51113 system_pods.go:89] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:25.351675   51113 system_pods.go:89] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running
	I1207 21:21:25.351681   51113 system_pods.go:126] duration metric: took 11.04015ms to wait for k8s-apps to be running ...
	I1207 21:21:25.351686   51113 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:25.351725   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:25.368853   51113 system_svc.go:56] duration metric: took 17.156347ms WaitForService to wait for kubelet.
	I1207 21:21:25.368883   51113 kubeadm.go:581] duration metric: took 4m25.557159696s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:25.368908   51113 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:25.372224   51113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:25.372247   51113 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:25.372257   51113 node_conditions.go:105] duration metric: took 3.343495ms to run NodePressure ...
	I1207 21:21:25.372268   51113 start.go:228] waiting for startup goroutines ...
	I1207 21:21:25.372273   51113 start.go:233] waiting for cluster config update ...
	I1207 21:21:25.372282   51113 start.go:242] writing updated cluster config ...
	I1207 21:21:25.372598   51113 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:25.426941   51113 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:21:25.429177   51113 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-275828" cluster and "default" namespace by default
	I1207 21:21:24.252623   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:21:24.278852   51037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:21:24.346081   51037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:21:24.346144   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.346161   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=no-preload-950431 minikube.k8s.io/updated_at=2023_12_07T21_21_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.458044   51037 ops.go:34] apiserver oom_adj: -16
	I1207 21:21:24.715413   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.801098   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:25.396467   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:25.895918   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:26.396185   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.914616   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:26.915500   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:26.896260   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:27.396455   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:27.896542   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:28.396551   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:28.896865   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.395921   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.896782   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:30.396223   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:30.896296   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:31.395834   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.414005   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:31.415580   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:31.896019   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:32.395959   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:32.895826   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:33.396820   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:33.896674   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:34.396109   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:34.896537   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:35.396438   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:35.896709   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:36.396689   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:36.896404   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:37.062200   51037 kubeadm.go:1088] duration metric: took 12.716124423s to wait for elevateKubeSystemPrivileges.
	I1207 21:21:37.062237   51037 kubeadm.go:406] StartCluster complete in 5m12.769835709s
	I1207 21:21:37.062255   51037 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:21:37.062333   51037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:21:37.064828   51037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:21:37.065103   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:21:37.065193   51037 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:21:37.065273   51037 addons.go:69] Setting storage-provisioner=true in profile "no-preload-950431"
	I1207 21:21:37.065291   51037 addons.go:231] Setting addon storage-provisioner=true in "no-preload-950431"
	W1207 21:21:37.065299   51037 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:21:37.065297   51037 addons.go:69] Setting default-storageclass=true in profile "no-preload-950431"
	I1207 21:21:37.065323   51037 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:21:37.065329   51037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-950431"
	I1207 21:21:37.065349   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.065302   51037 addons.go:69] Setting metrics-server=true in profile "no-preload-950431"
	I1207 21:21:37.065374   51037 addons.go:231] Setting addon metrics-server=true in "no-preload-950431"
	W1207 21:21:37.065388   51037 addons.go:240] addon metrics-server should already be in state true
	I1207 21:21:37.065423   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.065737   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065751   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065751   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065780   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.065772   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.065821   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.083129   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I1207 21:21:37.083593   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I1207 21:21:37.083761   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084047   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084356   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I1207 21:21:37.084566   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.084590   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.084625   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.084645   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.084667   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084935   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.084997   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.085044   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.085065   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.085381   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.085505   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.085542   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.085741   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.085909   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.085964   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.089134   51037 addons.go:231] Setting addon default-storageclass=true in "no-preload-950431"
	W1207 21:21:37.089153   51037 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:21:37.089180   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.089673   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.089712   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.101048   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35191
	I1207 21:21:37.101516   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.102279   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.102300   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.102727   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.103618   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.106122   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.107693   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I1207 21:21:37.107843   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I1207 21:21:37.108128   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.108521   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.108696   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.108709   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.109070   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.109204   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.109227   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.114090   51037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:21:37.109833   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.109949   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.115707   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.115743   51037 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:21:37.115765   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:21:37.115789   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.116919   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.119056   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.120429   51037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:21:37.121716   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:21:37.121741   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:21:37.121759   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.119470   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.121830   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.121852   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.120097   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.122062   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.122309   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.122432   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.124738   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.124992   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.125012   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.125346   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.125523   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.125647   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.125817   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.136943   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I1207 21:21:37.137636   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.138210   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.138233   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.138659   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.138896   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.140541   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.140792   51037 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:21:37.140808   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:21:37.140824   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.144251   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.144616   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.144667   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.144856   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.145009   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.145167   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.145260   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.157909   51037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-950431" context rescaled to 1 replicas
	I1207 21:21:37.157965   51037 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:21:37.159529   51037 out.go:177] * Verifying Kubernetes components...
	I1207 21:21:33.914686   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:35.916902   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:38.413489   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:37.160895   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:37.329265   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:21:37.476842   51037 node_ready.go:35] waiting up to 6m0s for node "no-preload-950431" to be "Ready" ...
	I1207 21:21:37.481433   51037 node_ready.go:49] node "no-preload-950431" has status "Ready":"True"
	I1207 21:21:37.481456   51037 node_ready.go:38] duration metric: took 4.57457ms waiting for node "no-preload-950431" to be "Ready" ...
	I1207 21:21:37.481467   51037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:37.499564   51037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-cz2xd" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:37.556110   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:21:37.556142   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:21:37.558917   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:21:37.575696   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:21:37.653458   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:21:37.653478   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:21:37.782294   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:21:37.782322   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:21:37.850657   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:21:38.161232   51037 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1207 21:21:38.734356   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.175402881s)
	I1207 21:21:38.734410   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734420   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734423   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.158690213s)
	I1207 21:21:38.734466   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734482   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734859   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.734873   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.734860   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.734911   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.734927   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734935   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734913   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735006   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.735016   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.735028   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.735166   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735192   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.735321   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.735357   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735369   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.772677   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.772700   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.772969   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.773038   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.773055   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.056990   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206289914s)
	I1207 21:21:39.057048   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:39.057064   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:39.057441   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:39.057480   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:39.057502   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.057520   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:39.057534   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:39.057809   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:39.057826   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.057845   51037 addons.go:467] Verifying addon metrics-server=true in "no-preload-950431"
	I1207 21:21:39.060003   51037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:21:39.061797   51037 addons.go:502] enable addons completed in 1.996609653s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:21:39.690111   51037 pod_ready.go:102] pod "coredns-76f75df574-cz2xd" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:40.698712   51037 pod_ready.go:92] pod "coredns-76f75df574-cz2xd" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.698739   51037 pod_ready.go:81] duration metric: took 3.199144567s waiting for pod "coredns-76f75df574-cz2xd" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.698751   51037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hsjsq" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.714087   51037 pod_ready.go:92] pod "coredns-76f75df574-hsjsq" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.714108   51037 pod_ready.go:81] duration metric: took 15.350128ms waiting for pod "coredns-76f75df574-hsjsq" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.714117   51037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.725058   51037 pod_ready.go:92] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.725078   51037 pod_ready.go:81] duration metric: took 10.955777ms waiting for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.725089   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.742099   51037 pod_ready.go:92] pod "kube-apiserver-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.742127   51037 pod_ready.go:81] duration metric: took 17.029172ms waiting for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.742140   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.748676   51037 pod_ready.go:92] pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.748699   51037 pod_ready.go:81] duration metric: took 6.549805ms waiting for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.748713   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6v8td" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:41.988512   51037 pod_ready.go:92] pod "kube-proxy-6v8td" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:41.988537   51037 pod_ready.go:81] duration metric: took 1.239816309s waiting for pod "kube-proxy-6v8td" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:41.988545   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:42.283301   51037 pod_ready.go:92] pod "kube-scheduler-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:42.283330   51037 pod_ready.go:81] duration metric: took 294.777559ms waiting for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:42.283341   51037 pod_ready.go:38] duration metric: took 4.801864648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:42.283360   51037 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:42.283420   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:42.308983   51037 api_server.go:72] duration metric: took 5.150987572s to wait for apiserver process to appear ...
	I1207 21:21:42.309013   51037 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:42.309036   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:21:42.315006   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 200:
	ok
	I1207 21:21:42.316220   51037 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 21:21:42.316240   51037 api_server.go:131] duration metric: took 7.219959ms to wait for apiserver health ...
	I1207 21:21:42.316247   51037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:42.485186   51037 system_pods.go:59] 9 kube-system pods found
	I1207 21:21:42.485214   51037 system_pods.go:61] "coredns-76f75df574-cz2xd" [5757c023-02cd-4be8-b4cc-6b45154f7b5a] Running
	I1207 21:21:42.485218   51037 system_pods.go:61] "coredns-76f75df574-hsjsq" [91f9ed18-c964-409d-9a58-7c84c62d51db] Running
	I1207 21:21:42.485223   51037 system_pods.go:61] "etcd-no-preload-950431" [c5480a67-a406-4014-bf13-3e4e970d528b] Running
	I1207 21:21:42.485228   51037 system_pods.go:61] "kube-apiserver-no-preload-950431" [73177a27-c561-4f5c-900a-80226abb7bf1] Running
	I1207 21:21:42.485234   51037 system_pods.go:61] "kube-controller-manager-no-preload-950431" [3e231c95-fb0b-4915-9ab0-45f35e7d6a2c] Running
	I1207 21:21:42.485237   51037 system_pods.go:61] "kube-proxy-6v8td" [268d28d1-60a9-4323-b36f-883388fbdcea] Running
	I1207 21:21:42.485242   51037 system_pods.go:61] "kube-scheduler-no-preload-950431" [a6767118-a858-439d-a58f-0e62b0b7442e] Running
	I1207 21:21:42.485251   51037 system_pods.go:61] "metrics-server-57f55c9bc5-ffkls" [e571e115-9e30-4be3-b77c-27db27a95feb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:42.485258   51037 system_pods.go:61] "storage-provisioner" [9400eb14-80e0-4725-906e-b80cd7e998a1] Running
	I1207 21:21:42.485278   51037 system_pods.go:74] duration metric: took 169.025303ms to wait for pod list to return data ...
	I1207 21:21:42.485287   51037 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:42.680542   51037 default_sa.go:45] found service account: "default"
	I1207 21:21:42.680569   51037 default_sa.go:55] duration metric: took 195.272707ms for default service account to be created ...
	I1207 21:21:42.680577   51037 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:42.890877   51037 system_pods.go:86] 9 kube-system pods found
	I1207 21:21:42.890927   51037 system_pods.go:89] "coredns-76f75df574-cz2xd" [5757c023-02cd-4be8-b4cc-6b45154f7b5a] Running
	I1207 21:21:42.890933   51037 system_pods.go:89] "coredns-76f75df574-hsjsq" [91f9ed18-c964-409d-9a58-7c84c62d51db] Running
	I1207 21:21:42.890938   51037 system_pods.go:89] "etcd-no-preload-950431" [c5480a67-a406-4014-bf13-3e4e970d528b] Running
	I1207 21:21:42.890942   51037 system_pods.go:89] "kube-apiserver-no-preload-950431" [73177a27-c561-4f5c-900a-80226abb7bf1] Running
	I1207 21:21:42.890946   51037 system_pods.go:89] "kube-controller-manager-no-preload-950431" [3e231c95-fb0b-4915-9ab0-45f35e7d6a2c] Running
	I1207 21:21:42.890950   51037 system_pods.go:89] "kube-proxy-6v8td" [268d28d1-60a9-4323-b36f-883388fbdcea] Running
	I1207 21:21:42.890954   51037 system_pods.go:89] "kube-scheduler-no-preload-950431" [a6767118-a858-439d-a58f-0e62b0b7442e] Running
	I1207 21:21:42.890960   51037 system_pods.go:89] "metrics-server-57f55c9bc5-ffkls" [e571e115-9e30-4be3-b77c-27db27a95feb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:42.890965   51037 system_pods.go:89] "storage-provisioner" [9400eb14-80e0-4725-906e-b80cd7e998a1] Running
	I1207 21:21:42.890973   51037 system_pods.go:126] duration metric: took 210.38383ms to wait for k8s-apps to be running ...
	I1207 21:21:42.890979   51037 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:42.891021   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:42.907279   51037 system_svc.go:56] duration metric: took 16.290689ms WaitForService to wait for kubelet.
	I1207 21:21:42.907306   51037 kubeadm.go:581] duration metric: took 5.749318034s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:42.907328   51037 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:43.081361   51037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:43.081390   51037 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:43.081401   51037 node_conditions.go:105] duration metric: took 174.067442ms to run NodePressure ...
	I1207 21:21:43.081412   51037 start.go:228] waiting for startup goroutines ...
	I1207 21:21:43.081420   51037 start.go:233] waiting for cluster config update ...
	I1207 21:21:43.081433   51037 start.go:242] writing updated cluster config ...
	I1207 21:21:43.081691   51037 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:43.131409   51037 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1207 21:21:43.133483   51037 out.go:177] * Done! kubectl is now configured to use "no-preload-950431" cluster and "default" namespace by default
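
The apiserver wait logged just above (api_server.go:253/279, polling https://192.168.50.100:8443/healthz until it answers 200 with body "ok") boils down to a simple HTTP poll loop. A minimal Go sketch of that kind of poll follows; it is illustrative only and is not minikube's implementation — the URL, the 2-second poll interval, and the use of InsecureSkipVerify are assumptions made for the example (minikube trusts the cluster CA from its own profile instead).

// healthzpoll: illustrative sketch of polling an apiserver /healthz endpoint
// until it reports "ok", similar in spirit to the api_server.go wait above.
// Not minikube's code; URL, interval, and InsecureSkipVerify are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification for brevity; a real client would
		// verify against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.100:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
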
	I1207 21:21:40.414676   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:42.913795   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:44.914599   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:47.414431   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:49.913391   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:51.914426   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:53.915196   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:55.923342   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:58.413783   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:00.414241   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:02.414435   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:04.913358   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:06.913909   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:08.915098   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:11.414320   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:13.414489   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:15.913521   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:18.415215   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:19.107244   50270 pod_ready.go:81] duration metric: took 4m0.000150933s waiting for pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace to be "Ready" ...
	E1207 21:22:19.107300   50270 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:22:19.107323   50270 pod_ready.go:38] duration metric: took 4m1.199790563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:19.107355   50270 kubeadm.go:640] restartCluster took 5m20.261390035s
	W1207 21:22:19.107437   50270 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:22:19.107470   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:22:26.124587   50270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (7.017092462s)
	I1207 21:22:26.124664   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:22:26.139323   50270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:22:26.150243   50270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:22:26.164289   50270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:22:26.164356   50270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1207 21:22:26.390137   50270 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:22:39.046001   50270 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1207 21:22:39.046063   50270 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:22:39.046164   50270 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:22:39.046322   50270 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:22:39.046454   50270 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:22:39.046581   50270 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:22:39.046685   50270 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:22:39.046759   50270 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1207 21:22:39.046836   50270 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:22:39.048426   50270 out.go:204]   - Generating certificates and keys ...
	I1207 21:22:39.048532   50270 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:22:39.048617   50270 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:22:39.048713   50270 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:22:39.048808   50270 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:22:39.048899   50270 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:22:39.048977   50270 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:22:39.049066   50270 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:22:39.049151   50270 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:22:39.049254   50270 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:22:39.049341   50270 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:22:39.049396   50270 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:22:39.049496   50270 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:22:39.049578   50270 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:22:39.049671   50270 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:22:39.049758   50270 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:22:39.049829   50270 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:22:39.049884   50270 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:22:39.051499   50270 out.go:204]   - Booting up control plane ...
	I1207 21:22:39.051604   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:22:39.051706   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:22:39.051778   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:22:39.051841   50270 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:22:39.052043   50270 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:22:39.052137   50270 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.502878 seconds
	I1207 21:22:39.052296   50270 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:22:39.052458   50270 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:22:39.052537   50270 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:22:39.052714   50270 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-483745 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1207 21:22:39.052802   50270 kubeadm.go:322] [bootstrap-token] Using token: 88595b.vk24k0k7lcyxvxlg
	I1207 21:22:39.054142   50270 out.go:204]   - Configuring RBAC rules ...
	I1207 21:22:39.054250   50270 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:22:39.054369   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:22:39.054470   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:22:39.054565   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:22:39.054675   50270 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:22:39.054740   50270 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:22:39.054805   50270 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:22:39.054813   50270 kubeadm.go:322] 
	I1207 21:22:39.054905   50270 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:22:39.054917   50270 kubeadm.go:322] 
	I1207 21:22:39.054996   50270 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:22:39.055004   50270 kubeadm.go:322] 
	I1207 21:22:39.055031   50270 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:22:39.055107   50270 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:22:39.055174   50270 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:22:39.055187   50270 kubeadm.go:322] 
	I1207 21:22:39.055254   50270 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:22:39.055366   50270 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:22:39.055467   50270 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:22:39.055476   50270 kubeadm.go:322] 
	I1207 21:22:39.055565   50270 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1207 21:22:39.055655   50270 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:22:39.055663   50270 kubeadm.go:322] 
	I1207 21:22:39.055776   50270 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 88595b.vk24k0k7lcyxvxlg \
	I1207 21:22:39.055929   50270 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:22:39.055969   50270 kubeadm.go:322]     --control-plane 	  
	I1207 21:22:39.055979   50270 kubeadm.go:322] 
	I1207 21:22:39.056099   50270 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:22:39.056111   50270 kubeadm.go:322] 
	I1207 21:22:39.056215   50270 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 88595b.vk24k0k7lcyxvxlg \
	I1207 21:22:39.056371   50270 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:22:39.056402   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:22:39.056414   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:22:39.058073   50270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:22:39.059659   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:22:39.078052   50270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:22:39.118479   50270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:22:39.118540   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=old-k8s-version-483745 minikube.k8s.io/updated_at=2023_12_07T21_22_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.118551   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.149391   50270 ops.go:34] apiserver oom_adj: -16
	I1207 21:22:39.334606   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.476182   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:40.075027   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:40.574693   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:41.074497   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:41.575214   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:42.075168   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:42.575162   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:43.074671   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:43.575406   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:44.074823   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:44.574597   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:45.075138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:45.575119   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:46.075437   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:46.575138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:47.075138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:47.575171   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:48.074939   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:48.574679   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:49.075065   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:49.574571   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:50.074553   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:50.575129   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:51.075320   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:51.574806   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:52.075136   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:52.575144   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:53.075139   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:53.575394   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:54.075185   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:54.274051   50270 kubeadm.go:1088] duration metric: took 15.155559482s to wait for elevateKubeSystemPrivileges.
	I1207 21:22:54.274092   50270 kubeadm.go:406] StartCluster complete in 5m55.488226201s
	I1207 21:22:54.274140   50270 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:22:54.274247   50270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:22:54.276679   50270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:22:54.276902   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:22:54.276991   50270 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:22:54.277064   50270 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277090   50270 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-483745"
	W1207 21:22:54.277103   50270 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:22:54.277101   50270 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277089   50270 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277116   50270 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:22:54.277127   50270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-483745"
	I1207 21:22:54.277152   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.277119   50270 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-483745"
	W1207 21:22:54.277169   50270 addons.go:240] addon metrics-server should already be in state true
	I1207 21:22:54.277208   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.277529   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277564   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277573   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.277581   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277591   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.277612   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.293696   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I1207 21:22:54.293908   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I1207 21:22:54.294118   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.294622   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.294642   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.294656   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.295100   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.295119   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.295182   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.295512   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.295671   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.295709   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I1207 21:22:54.295752   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.295791   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.296131   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.296662   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.296681   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.297077   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.297597   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.297635   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.299605   50270 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-483745"
	W1207 21:22:54.299630   50270 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:22:54.299658   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.300047   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.300087   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.314531   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1207 21:22:54.315168   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.315718   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.315804   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I1207 21:22:54.315809   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.316447   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.316491   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.316657   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.316979   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.317005   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.317340   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.317887   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.317945   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.319086   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.321272   50270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:22:54.320074   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I1207 21:22:54.322834   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:22:54.322849   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:22:54.322863   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.323218   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.323677   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.323689   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.323997   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.324166   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.326460   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.328172   50270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:22:54.327148   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.328366   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.329567   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.329588   50270 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:22:54.329593   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.329600   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:22:54.329613   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.329725   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.329909   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.330088   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.333435   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.334161   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.334192   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.334480   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.334786   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.334959   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.335091   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.336340   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40483
	I1207 21:22:54.336672   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.337021   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.337034   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.337316   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.337486   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.338808   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.339043   50270 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:22:54.339053   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:22:54.339064   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.341591   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.341937   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.341960   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.342127   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.342285   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.342453   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.342592   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.385908   50270 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-483745" context rescaled to 1 replicas
	I1207 21:22:54.385959   50270 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:22:54.387637   50270 out.go:177] * Verifying Kubernetes components...
	I1207 21:22:54.388616   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:22:54.604286   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:22:54.671574   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:22:54.671601   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:22:54.752688   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:22:54.752714   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:22:54.792943   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:22:54.847458   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:22:54.847489   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:22:54.916698   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:22:54.931860   50270 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-483745" to be "Ready" ...
	I1207 21:22:54.931924   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:22:55.152010   50270 node_ready.go:49] node "old-k8s-version-483745" has status "Ready":"True"
	I1207 21:22:55.152041   50270 node_ready.go:38] duration metric: took 220.147741ms waiting for node "old-k8s-version-483745" to be "Ready" ...
	I1207 21:22:55.152055   50270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:55.356283   50270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:55.654243   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.049922238s)
	I1207 21:22:55.654296   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.654313   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.654661   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.654687   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.654694   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Closing plugin on server side
	I1207 21:22:55.654703   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.654715   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.655010   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.655052   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.693855   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.693876   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.694176   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.694197   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.927642   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.13465835s)
	I1207 21:22:55.927714   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.927731   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.928056   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.928076   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.928087   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.928096   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.928395   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Closing plugin on server side
	I1207 21:22:55.928413   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.928428   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.033797   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117050773s)
	I1207 21:22:56.033845   50270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.101898699s)
	I1207 21:22:56.033881   50270 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1207 21:22:56.033850   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:56.033918   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:56.034207   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:56.034220   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.034229   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:56.034236   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:56.034460   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:56.034480   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.034516   50270 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-483745"
	I1207 21:22:56.036701   50270 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1207 21:22:56.038078   50270 addons.go:502] enable addons completed in 1.76109636s: enabled=[default-storageclass storage-provisioner metrics-server]
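
The pod_ready.go waits that follow (and the retry.go loops further down) amount to listing pods by label and checking the PodReady condition, retrying until a deadline. Below is a minimal client-go sketch of that pattern; it is illustrative only, not minikube's helper — the kubeconfig path, namespace, label selector, and 6-minute deadline are assumptions chosen to mirror the values logged in this report.

// podreadywait: illustrative sketch of a "wait for labeled pods to be Ready"
// loop like the one pod_ready.go logs here. Not minikube's implementation;
// kubeconfig path, selector, and deadline below are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForLabeledPods polls until every pod matching selector in ns is Ready.
func waitForLabeledPods(clientset *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := clientset.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !podIsReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q in %q not Ready within %s", selector, ns, timeout)
}

func main() {
	// Assumption: kubeconfig path as logged above; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17719-9628/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForLabeledPods(clientset, "kube-system", "k8s-app=kube-dns", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
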
	I1207 21:22:57.718454   50270 pod_ready.go:102] pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:58.708880   50270 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-jvh5w" not found
	I1207 21:22:58.708910   50270 pod_ready.go:81] duration metric: took 3.352602717s waiting for pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace to be "Ready" ...
	E1207 21:22:58.708920   50270 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-jvh5w" not found
	I1207 21:22:58.708930   50270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.715179   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace has status "Ready":"True"
	I1207 21:22:58.715205   50270 pod_ready.go:81] duration metric: took 6.268335ms waiting for pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.715219   50270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-42fzb" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.720511   50270 pod_ready.go:92] pod "kube-proxy-42fzb" in "kube-system" namespace has status "Ready":"True"
	I1207 21:22:58.720526   50270 pod_ready.go:81] duration metric: took 5.302238ms waiting for pod "kube-proxy-42fzb" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.720544   50270 pod_ready.go:38] duration metric: took 3.568467628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:58.720558   50270 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:22:58.720609   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:22:58.737687   50270 api_server.go:72] duration metric: took 4.351680673s to wait for apiserver process to appear ...
	I1207 21:22:58.737712   50270 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:22:58.737730   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:22:58.744722   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:22:58.745867   50270 api_server.go:141] control plane version: v1.16.0
	I1207 21:22:58.745887   50270 api_server.go:131] duration metric: took 8.167725ms to wait for apiserver health ...
	I1207 21:22:58.745897   50270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:22:58.750259   50270 system_pods.go:59] 4 kube-system pods found
	I1207 21:22:58.750278   50270 system_pods.go:61] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.750283   50270 system_pods.go:61] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.750292   50270 system_pods.go:61] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.750306   50270 system_pods.go:61] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.750319   50270 system_pods.go:74] duration metric: took 4.415504ms to wait for pod list to return data ...
	I1207 21:22:58.750328   50270 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:22:58.753151   50270 default_sa.go:45] found service account: "default"
	I1207 21:22:58.753173   50270 default_sa.go:55] duration metric: took 2.836309ms for default service account to be created ...
	I1207 21:22:58.753181   50270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:22:58.757164   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:58.757188   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.757195   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.757212   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.757223   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.757246   50270 retry.go:31] will retry after 195.542562ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:58.957411   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:58.957443   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.957451   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.957461   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.957471   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.957494   50270 retry.go:31] will retry after 294.291725ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:59.264559   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:59.264599   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:59.264608   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:59.264620   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:59.264632   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:59.264651   50270 retry.go:31] will retry after 392.704433ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:59.663939   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:59.663967   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:59.663973   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:59.663979   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:59.663985   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:59.664003   50270 retry.go:31] will retry after 598.787872ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:00.268415   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:00.268441   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:00.268447   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:00.268453   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:00.268458   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:00.268472   50270 retry.go:31] will retry after 554.6659ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:00.829267   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:00.829293   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:00.829299   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:00.829305   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:00.829309   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:00.829325   50270 retry.go:31] will retry after 832.708436ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:01.667497   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:01.667526   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:01.667532   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:01.667539   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:01.667543   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:01.667560   50270 retry.go:31] will retry after 824.504206ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:02.497009   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:02.497033   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:02.497038   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:02.497045   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:02.497049   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:02.497064   50270 retry.go:31] will retry after 1.335460815s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:03.837788   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:03.837816   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:03.837821   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:03.837828   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:03.837833   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:03.837848   50270 retry.go:31] will retry after 1.185883705s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:05.028679   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:05.028712   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:05.028721   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:05.028731   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:05.028738   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:05.028758   50270 retry.go:31] will retry after 2.162817833s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:07.196435   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:07.196468   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:07.196476   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:07.196485   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:07.196493   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:07.196512   50270 retry.go:31] will retry after 2.853202831s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:10.054277   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:10.054303   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:10.054308   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:10.054315   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:10.054320   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:10.054335   50270 retry.go:31] will retry after 3.392213767s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:13.452019   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:13.452046   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:13.452052   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:13.452059   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:13.452064   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:13.452081   50270 retry.go:31] will retry after 3.42315118s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:16.882830   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:16.882856   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:16.882861   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:16.882868   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:16.882873   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:16.882887   50270 retry.go:31] will retry after 3.42232982s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:20.310740   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:20.310766   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:20.310771   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:20.310780   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:20.310785   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:20.310801   50270 retry.go:31] will retry after 6.110306117s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:26.426492   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:26.426520   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:26.426525   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:26.426532   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:26.426537   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:26.426554   50270 retry.go:31] will retry after 5.458076236s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:31.890544   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:31.890575   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:31.890580   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:31.890589   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:31.890593   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:31.890611   50270 retry.go:31] will retry after 10.030622922s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:41.928589   50270 system_pods.go:86] 6 kube-system pods found
	I1207 21:23:41.928622   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:41.928630   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:23:41.928637   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:23:41.928642   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:41.928651   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:41.928659   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:41.928677   50270 retry.go:31] will retry after 11.183539963s: missing components: kube-controller-manager, kube-scheduler
	I1207 21:23:53.119257   50270 system_pods.go:86] 8 kube-system pods found
	I1207 21:23:53.119284   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:53.119292   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:23:53.119298   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:23:53.119304   50270 system_pods.go:89] "kube-controller-manager-old-k8s-version-483745" [069a811c-4601-4e3c-bf64-77e4cf8d8e0e] Pending
	I1207 21:23:53.119309   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:53.119315   50270 system_pods.go:89] "kube-scheduler-old-k8s-version-483745" [1fa6f211-aa49-4ab9-ba1d-d613e7673ba8] Running
	I1207 21:23:53.119325   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:53.119332   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:53.119353   50270 retry.go:31] will retry after 13.123307809s: missing components: kube-controller-manager
	I1207 21:24:06.249016   50270 system_pods.go:86] 8 kube-system pods found
	I1207 21:24:06.249042   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:24:06.249048   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:24:06.249054   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:24:06.249059   50270 system_pods.go:89] "kube-controller-manager-old-k8s-version-483745" [069a811c-4601-4e3c-bf64-77e4cf8d8e0e] Running
	I1207 21:24:06.249064   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:24:06.249068   50270 system_pods.go:89] "kube-scheduler-old-k8s-version-483745" [1fa6f211-aa49-4ab9-ba1d-d613e7673ba8] Running
	I1207 21:24:06.249074   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:24:06.249079   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:24:06.249087   50270 system_pods.go:126] duration metric: took 1m7.495900916s to wait for k8s-apps to be running ...
	I1207 21:24:06.249092   50270 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:24:06.249137   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:24:06.265801   50270 system_svc.go:56] duration metric: took 16.700976ms WaitForService to wait for kubelet.
	I1207 21:24:06.265820   50270 kubeadm.go:581] duration metric: took 1m11.879821949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:24:06.265837   50270 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:24:06.269326   50270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:24:06.269346   50270 node_conditions.go:123] node cpu capacity is 2
	I1207 21:24:06.269356   50270 node_conditions.go:105] duration metric: took 3.51576ms to run NodePressure ...
	I1207 21:24:06.269366   50270 start.go:228] waiting for startup goroutines ...
	I1207 21:24:06.269371   50270 start.go:233] waiting for cluster config update ...
	I1207 21:24:06.269384   50270 start.go:242] writing updated cluster config ...
	I1207 21:24:06.269660   50270 ssh_runner.go:195] Run: rm -f paused
	I1207 21:24:06.317992   50270 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1207 21:24:06.320122   50270 out.go:177] 
	W1207 21:24:06.321437   50270 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1207 21:24:06.322708   50270 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1207 21:24:06.324092   50270 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-483745" cluster and "default" namespace by default
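
The growing "will retry after …: missing components: …" intervals in the start log above come from a poll-with-backoff loop around the system-pods check. The following is only an illustrative Go sketch of that pattern, not minikube's actual retry.go or system_pods.go code; checkComponents, waitForSystemPods, and the backoff constants are hypothetical stand-ins.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkComponents stands in for the system_pods check: it returns the names of
// control-plane components that are not yet Running. Here it is a stub.
func checkComponents() []string {
	return nil // pretend everything is already Running
}

// waitForSystemPods polls until no components are missing or the timeout expires,
// sleeping a jittered, roughly doubling interval between attempts, which mirrors
// the growing delays printed in the log above.
func waitForSystemPods(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for {
		missing := checkComponents()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for system pods")
		}
		// Jittered, roughly doubling delay between checks.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: missing components: %v\n", sleep, missing)
		time.Sleep(sleep)
		backoff *= 2
	}
}

func main() {
	if err := waitForSystemPods(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}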
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 21:15:32 UTC, ends at Thu 2023-12-07 21:30:03 UTC. --
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.021010290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984603020992485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5603b7ac-eadc-4c0b-b7f3-decb6c00fc5d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.021530675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=638dcd4e-744d-431d-a564-8cbe7c427923 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.021611353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=638dcd4e-744d-431d-a564-8cbe7c427923 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.021763761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55d67718482d4572c85c9612435da05cbca02696fb9f0abe9867d2a9bb2ab0f7,PodSandboxId:71157e5ee49d22315c42b38ece28572dd10ea5aae17a4f5c40cde624172435f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984059926992953,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14,},Annotations:map[string]string{io.kubernetes.container.hash: 89a041ad,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a33dbfd0cb2ec9d98b7c040441bd146c8c8fe27914e3f1137151910d6a0dae,PodSandboxId:7ae1c8c92da2d1d7912d176fcd207453e8918abd4bb896bf97603df1bd7d86b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701984058368483459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h4pmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3cc315-efaf-47b9-86e3-851cc930461b,},Annotations:map[string]string{io.kubernetes.container.hash: 70f362f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd79a03ef1e58abdb0f13478da45c2551657e49455d2b8e2adbbcb6becd6c59,PodSandboxId:b34bf4aa578ece3e829560ca325f38a2417209b19c7370b3a2affbed66762bfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701984057169611507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nllk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89c53a27-fa3e-40e9-b180-1bb6ae5c7b62,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe6f40c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e322f9a929a334072d4474e587a9eaa44ac85866bd4d222de6223371d43f99,PodSandboxId:31060c233cb8658454b6f8f9d659e14b51a4994447ea00ac2c70a860f616993f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701984036059091180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: bf95ab796fecf05f0e74a5a77549e004,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a89e2a11dada38a99b94b0e571ef5ff2cd3e0d8dba7a7bc08f2a267048bf099b,PodSandboxId:487e5cee31477f6068dc67ce06c2b3c639e440650261c6df8a1a0131f0ee39be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701984035831987046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557f25590e782dbdd3c0d081d2d91cf1,},Annotations:
map[string]string{io.kubernetes.container.hash: 85891a44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee851783f696a899540cc4d7612b26aa3902587cd2c8bf254e4737de2ed45458,PodSandboxId:e8940552f6ea0fb01ac6b3d337bfe6519629ec0c6ab3f47e93cdc549f015c10f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701984035377278788,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a487adf7114a53a4bb89
ae3f412bd87,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac812ba232eb7a81be1ff8566eb7f1058ed1c55c8dd708182faa198d3f19f057,PodSandboxId:800c608688cf7b32339ff05dff030d73e3028a125eadd1d37914a6216a6c16c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701984035191385748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74569b3ec3f3376a1fb2afd7e14df1
1,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=638dcd4e-744d-431d-a564-8cbe7c427923 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.068538781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=87f47df5-e679-44f9-9664-834dea9d88e1 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.068618844Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=87f47df5-e679-44f9-9664-834dea9d88e1 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.070396219Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3adcd1ba-b7d0-4725-ad6e-59871028553e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.070853462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984603070832422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3adcd1ba-b7d0-4725-ad6e-59871028553e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.071657819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7deaaa01-9245-4ed0-a45a-6bbb48ccc1e6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.071736560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7deaaa01-9245-4ed0-a45a-6bbb48ccc1e6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.071989707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55d67718482d4572c85c9612435da05cbca02696fb9f0abe9867d2a9bb2ab0f7,PodSandboxId:71157e5ee49d22315c42b38ece28572dd10ea5aae17a4f5c40cde624172435f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984059926992953,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14,},Annotations:map[string]string{io.kubernetes.container.hash: 89a041ad,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a33dbfd0cb2ec9d98b7c040441bd146c8c8fe27914e3f1137151910d6a0dae,PodSandboxId:7ae1c8c92da2d1d7912d176fcd207453e8918abd4bb896bf97603df1bd7d86b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701984058368483459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h4pmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3cc315-efaf-47b9-86e3-851cc930461b,},Annotations:map[string]string{io.kubernetes.container.hash: 70f362f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd79a03ef1e58abdb0f13478da45c2551657e49455d2b8e2adbbcb6becd6c59,PodSandboxId:b34bf4aa578ece3e829560ca325f38a2417209b19c7370b3a2affbed66762bfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701984057169611507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nllk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89c53a27-fa3e-40e9-b180-1bb6ae5c7b62,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe6f40c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e322f9a929a334072d4474e587a9eaa44ac85866bd4d222de6223371d43f99,PodSandboxId:31060c233cb8658454b6f8f9d659e14b51a4994447ea00ac2c70a860f616993f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701984036059091180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: bf95ab796fecf05f0e74a5a77549e004,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a89e2a11dada38a99b94b0e571ef5ff2cd3e0d8dba7a7bc08f2a267048bf099b,PodSandboxId:487e5cee31477f6068dc67ce06c2b3c639e440650261c6df8a1a0131f0ee39be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701984035831987046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557f25590e782dbdd3c0d081d2d91cf1,},Annotations:
map[string]string{io.kubernetes.container.hash: 85891a44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee851783f696a899540cc4d7612b26aa3902587cd2c8bf254e4737de2ed45458,PodSandboxId:e8940552f6ea0fb01ac6b3d337bfe6519629ec0c6ab3f47e93cdc549f015c10f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701984035377278788,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a487adf7114a53a4bb89
ae3f412bd87,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac812ba232eb7a81be1ff8566eb7f1058ed1c55c8dd708182faa198d3f19f057,PodSandboxId:800c608688cf7b32339ff05dff030d73e3028a125eadd1d37914a6216a6c16c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701984035191385748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74569b3ec3f3376a1fb2afd7e14df1
1,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7deaaa01-9245-4ed0-a45a-6bbb48ccc1e6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.117542609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=75617e9d-fd5e-4841-88b2-ed9232c1cb24 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.117624696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=75617e9d-fd5e-4841-88b2-ed9232c1cb24 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.119033377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9bbe940b-c664-4211-a27f-2d37bb71658b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.119526891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984603119511172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9bbe940b-c664-4211-a27f-2d37bb71658b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.120138445Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=420c26c7-d1e8-4cbd-8a8c-2639d8851ffd name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.120210965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=420c26c7-d1e8-4cbd-8a8c-2639d8851ffd name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.120412254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55d67718482d4572c85c9612435da05cbca02696fb9f0abe9867d2a9bb2ab0f7,PodSandboxId:71157e5ee49d22315c42b38ece28572dd10ea5aae17a4f5c40cde624172435f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984059926992953,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14,},Annotations:map[string]string{io.kubernetes.container.hash: 89a041ad,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a33dbfd0cb2ec9d98b7c040441bd146c8c8fe27914e3f1137151910d6a0dae,PodSandboxId:7ae1c8c92da2d1d7912d176fcd207453e8918abd4bb896bf97603df1bd7d86b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701984058368483459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h4pmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3cc315-efaf-47b9-86e3-851cc930461b,},Annotations:map[string]string{io.kubernetes.container.hash: 70f362f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd79a03ef1e58abdb0f13478da45c2551657e49455d2b8e2adbbcb6becd6c59,PodSandboxId:b34bf4aa578ece3e829560ca325f38a2417209b19c7370b3a2affbed66762bfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701984057169611507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nllk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89c53a27-fa3e-40e9-b180-1bb6ae5c7b62,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe6f40c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e322f9a929a334072d4474e587a9eaa44ac85866bd4d222de6223371d43f99,PodSandboxId:31060c233cb8658454b6f8f9d659e14b51a4994447ea00ac2c70a860f616993f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701984036059091180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: bf95ab796fecf05f0e74a5a77549e004,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a89e2a11dada38a99b94b0e571ef5ff2cd3e0d8dba7a7bc08f2a267048bf099b,PodSandboxId:487e5cee31477f6068dc67ce06c2b3c639e440650261c6df8a1a0131f0ee39be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701984035831987046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557f25590e782dbdd3c0d081d2d91cf1,},Annotations:
map[string]string{io.kubernetes.container.hash: 85891a44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee851783f696a899540cc4d7612b26aa3902587cd2c8bf254e4737de2ed45458,PodSandboxId:e8940552f6ea0fb01ac6b3d337bfe6519629ec0c6ab3f47e93cdc549f015c10f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701984035377278788,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a487adf7114a53a4bb89
ae3f412bd87,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac812ba232eb7a81be1ff8566eb7f1058ed1c55c8dd708182faa198d3f19f057,PodSandboxId:800c608688cf7b32339ff05dff030d73e3028a125eadd1d37914a6216a6c16c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701984035191385748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74569b3ec3f3376a1fb2afd7e14df1
1,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=420c26c7-d1e8-4cbd-8a8c-2639d8851ffd name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.162434027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2829d7dd-9b54-4ced-825d-ff107738c14f name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.162621511Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2829d7dd-9b54-4ced-825d-ff107738c14f name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.163695373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b444d94c-8cac-4530-aa47-912ea9f90800 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.164332871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984603164318356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b444d94c-8cac-4530-aa47-912ea9f90800 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.165118375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ae2e4df8-7ef3-49ea-b5b7-d6fe33b30fa6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.165187947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ae2e4df8-7ef3-49ea-b5b7-d6fe33b30fa6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:03 embed-certs-598346 crio[714]: time="2023-12-07 21:30:03.165395795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55d67718482d4572c85c9612435da05cbca02696fb9f0abe9867d2a9bb2ab0f7,PodSandboxId:71157e5ee49d22315c42b38ece28572dd10ea5aae17a4f5c40cde624172435f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984059926992953,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14,},Annotations:map[string]string{io.kubernetes.container.hash: 89a041ad,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a33dbfd0cb2ec9d98b7c040441bd146c8c8fe27914e3f1137151910d6a0dae,PodSandboxId:7ae1c8c92da2d1d7912d176fcd207453e8918abd4bb896bf97603df1bd7d86b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701984058368483459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h4pmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3cc315-efaf-47b9-86e3-851cc930461b,},Annotations:map[string]string{io.kubernetes.container.hash: 70f362f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd79a03ef1e58abdb0f13478da45c2551657e49455d2b8e2adbbcb6becd6c59,PodSandboxId:b34bf4aa578ece3e829560ca325f38a2417209b19c7370b3a2affbed66762bfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701984057169611507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nllk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89c53a27-fa3e-40e9-b180-1bb6ae5c7b62,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe6f40c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e322f9a929a334072d4474e587a9eaa44ac85866bd4d222de6223371d43f99,PodSandboxId:31060c233cb8658454b6f8f9d659e14b51a4994447ea00ac2c70a860f616993f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701984036059091180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: bf95ab796fecf05f0e74a5a77549e004,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a89e2a11dada38a99b94b0e571ef5ff2cd3e0d8dba7a7bc08f2a267048bf099b,PodSandboxId:487e5cee31477f6068dc67ce06c2b3c639e440650261c6df8a1a0131f0ee39be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701984035831987046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557f25590e782dbdd3c0d081d2d91cf1,},Annotations:
map[string]string{io.kubernetes.container.hash: 85891a44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee851783f696a899540cc4d7612b26aa3902587cd2c8bf254e4737de2ed45458,PodSandboxId:e8940552f6ea0fb01ac6b3d337bfe6519629ec0c6ab3f47e93cdc549f015c10f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701984035377278788,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a487adf7114a53a4bb89
ae3f412bd87,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac812ba232eb7a81be1ff8566eb7f1058ed1c55c8dd708182faa198d3f19f057,PodSandboxId:800c608688cf7b32339ff05dff030d73e3028a125eadd1d37914a6216a6c16c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701984035191385748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74569b3ec3f3376a1fb2afd7e14df1
1,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ae2e4df8-7ef3-49ea-b5b7-d6fe33b30fa6 name=/runtime.v1.RuntimeService/ListContainers
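
The ListContainers request/response pairs above are ordinary CRI gRPC calls against the CRI-O socket. As a rough sketch (not code from this test run), a standalone client could issue the same call as follows; the socket path and the printed fields are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a unix socket; this path matches the cri-socket
	// annotation reported for the node further below.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter corresponds to the "No filters were applied, returning
	// full container list" debug lines above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		id := c.Id
		if len(id) > 13 {
			id = id[:13] // same truncation as the "container status" table below
		}
		fmt.Printf("%s  %s  %v\n", id, c.Metadata.Name, c.State)
	}
}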
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55d67718482d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   71157e5ee49d2       storage-provisioner
	79a33dbfd0cb2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   7ae1c8c92da2d       kube-proxy-h4pmv
	0bd79a03ef1e5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   b34bf4aa578ec       coredns-5dd5756b68-nllk7
	b9e322f9a929a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   31060c233cb86       kube-scheduler-embed-certs-598346
	a89e2a11dada3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   487e5cee31477       etcd-embed-certs-598346
	ee851783f696a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   e8940552f6ea0       kube-controller-manager-embed-certs-598346
	ac812ba232eb7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   800c608688cf7       kube-apiserver-embed-certs-598346
	
	* 
	* ==> coredns [0bd79a03ef1e58abdb0f13478da45c2551657e49455d2b8e2adbbcb6becd6c59] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:47206 - 45525 "HINFO IN 4812590669896354982.6400754222289715007. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014442929s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-598346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-598346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=embed-certs-598346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T21_20_43_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 21:20:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-598346
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 21:29:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 21:26:09 +0000   Thu, 07 Dec 2023 21:20:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 21:26:09 +0000   Thu, 07 Dec 2023 21:20:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 21:26:09 +0000   Thu, 07 Dec 2023 21:20:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 21:26:09 +0000   Thu, 07 Dec 2023 21:20:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.180
	  Hostname:    embed-certs-598346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c4331d3ecf844d2a32645f7c532352b
	  System UUID:                1c4331d3-ecf8-44d2-a326-45f7c532352b
	  Boot ID:                    06bf7769-9b17-4760-b917-c8bbfc301f7f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-nllk7                      100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     9m8s
	  kube-system                 etcd-embed-certs-598346                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         9m20s
	  kube-system                 kube-apiserver-embed-certs-598346             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-598346    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m20s
	  kube-system                 kube-proxy-h4pmv                              0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m8s
	  kube-system                 kube-scheduler-embed-certs-598346             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-pstg2               100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         9m5s
	  kube-system                 storage-provisioner                           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             370Mi (17%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m29s (x8 over 9m29s)  kubelet          Node embed-certs-598346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m29s (x8 over 9m29s)  kubelet          Node embed-certs-598346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m29s (x7 over 9m29s)  kubelet          Node embed-certs-598346 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node embed-certs-598346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node embed-certs-598346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node embed-certs-598346 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s                  kubelet          Node embed-certs-598346 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m10s                  kubelet          Node embed-certs-598346 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node embed-certs-598346 event: Registered Node embed-certs-598346 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 7 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066662] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.343275] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.391305] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150890] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.624765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.967299] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.112429] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.155791] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.107350] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.214278] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +17.300988] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[Dec 7 21:16] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 7 21:20] systemd-fstab-generator[3536]: Ignoring "noauto" for root device
	[  +9.308056] systemd-fstab-generator[3863]: Ignoring "noauto" for root device
	[ +13.357384] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [a89e2a11dada38a99b94b0e571ef5ff2cd3e0d8dba7a7bc08f2a267048bf099b] <==
	* {"level":"info","ts":"2023-12-07T21:20:37.404622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 switched to configuration voters=(728820823681708824)"}
	{"level":"info","ts":"2023-12-07T21:20:37.404788Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","added-peer-id":"a1d4aad7c74b318","added-peer-peer-urls":["https://192.168.72.180:2380"]}
	{"level":"info","ts":"2023-12-07T21:20:37.418799Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-07T21:20:37.427679Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a1d4aad7c74b318","initial-advertise-peer-urls":["https://192.168.72.180:2380"],"listen-peer-urls":["https://192.168.72.180:2380"],"advertise-client-urls":["https://192.168.72.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-07T21:20:37.427814Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-07T21:20:37.419001Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2023-12-07T21:20:37.428109Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2023-12-07T21:20:37.550157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-07T21:20:37.550507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-07T21:20:37.550621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgPreVoteResp from a1d4aad7c74b318 at term 1"}
	{"level":"info","ts":"2023-12-07T21:20:37.550656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became candidate at term 2"}
	{"level":"info","ts":"2023-12-07T21:20:37.550734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgVoteResp from a1d4aad7c74b318 at term 2"}
	{"level":"info","ts":"2023-12-07T21:20:37.55082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became leader at term 2"}
	{"level":"info","ts":"2023-12-07T21:20:37.550849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a1d4aad7c74b318 elected leader a1d4aad7c74b318 at term 2"}
	{"level":"info","ts":"2023-12-07T21:20:37.553807Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:20:37.554557Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:20:37.555683Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.180:2379"}
	{"level":"info","ts":"2023-12-07T21:20:37.556051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:20:37.556858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T21:20:37.559268Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:20:37.560118Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:20:37.560208Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:20:37.559502Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T21:20:37.560397Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T21:20:37.554497Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a1d4aad7c74b318","local-member-attributes":"{Name:embed-certs-598346 ClientURLs:[https://192.168.72.180:2379]}","request-path":"/0/members/a1d4aad7c74b318/attributes","cluster-id":"1bb44bc72743d07d","publish-timeout":"7s"}
	
	* 
	* ==> kernel <==
	*  21:30:03 up 14 min,  0 users,  load average: 0.10, 0.20, 0.17
	Linux embed-certs-598346 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ac812ba232eb7a81be1ff8566eb7f1058ed1c55c8dd708182faa198d3f19f057] <==
	* W1207 21:25:40.903060       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:25:40.903127       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:25:40.903138       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:25:40.903286       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:25:40.903429       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:25:40.904359       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:26:39.732381       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:26:40.903769       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:26:40.903982       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:26:40.904025       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:26:40.904988       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:26:40.905062       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:26:40.905070       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:27:39.732570       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1207 21:28:39.732553       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:28:40.904371       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:28:40.904552       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:28:40.904635       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:28:40.905703       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:28:40.905833       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:28:40.905863       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:29:39.732670       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [ee851783f696a899540cc4d7612b26aa3902587cd2c8bf254e4737de2ed45458] <==
	* I1207 21:24:26.437700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="114.873µs"
	E1207 21:24:54.876440       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:24:55.332655       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:25:24.882695       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:25:25.343070       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:25:54.888474       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:25:55.351719       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:26:24.893355       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:26:25.360015       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1207 21:26:52.443245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="579.303µs"
	E1207 21:26:54.902313       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:26:55.370161       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1207 21:27:03.447550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="300.801µs"
	E1207 21:27:24.908544       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:27:25.379677       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:27:54.914257       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:27:55.388126       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:28:24.920301       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:28:25.402273       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:28:54.928254       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:28:55.412715       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:29:24.934413       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:29:25.432190       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:29:54.940294       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:29:55.441410       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [79a33dbfd0cb2ec9d98b7c040441bd146c8c8fe27914e3f1137151910d6a0dae] <==
	* I1207 21:20:59.149621       1 server_others.go:69] "Using iptables proxy"
	I1207 21:20:59.266730       1 node.go:141] Successfully retrieved node IP: 192.168.72.180
	I1207 21:20:59.593349       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 21:20:59.593408       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 21:20:59.664052       1 server_others.go:152] "Using iptables Proxier"
	I1207 21:20:59.666364       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 21:20:59.666564       1 server.go:846] "Version info" version="v1.28.4"
	I1207 21:20:59.666782       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 21:20:59.671863       1 config.go:188] "Starting service config controller"
	I1207 21:20:59.672370       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 21:20:59.672789       1 config.go:97] "Starting endpoint slice config controller"
	I1207 21:20:59.672954       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 21:20:59.679748       1 config.go:315] "Starting node config controller"
	I1207 21:20:59.679793       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 21:20:59.773691       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 21:20:59.773794       1 shared_informer.go:318] Caches are synced for service config
	I1207 21:20:59.780428       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b9e322f9a929a334072d4474e587a9eaa44ac85866bd4d222de6223371d43f99] <==
	* W1207 21:20:39.944461       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:20:39.944659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 21:20:39.944677       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:20:39.944685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 21:20:39.946168       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 21:20:39.946220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 21:20:40.796577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:20:40.796675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 21:20:40.850503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 21:20:40.850610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 21:20:40.930281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 21:20:40.930337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1207 21:20:40.934463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 21:20:40.934551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1207 21:20:40.956484       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:20:40.956586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 21:20:41.046059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 21:20:41.046147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1207 21:20:41.143445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 21:20:41.143640       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1207 21:20:41.184003       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 21:20:41.184056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1207 21:20:41.274993       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 21:20:41.275073       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1207 21:20:43.774407       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 21:15:32 UTC, ends at Thu 2023-12-07 21:30:03 UTC. --
	Dec 07 21:27:27 embed-certs-598346 kubelet[3870]: E1207 21:27:27.422677    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:27:39 embed-certs-598346 kubelet[3870]: E1207 21:27:39.424083    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:27:43 embed-certs-598346 kubelet[3870]: E1207 21:27:43.524543    3870 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:27:43 embed-certs-598346 kubelet[3870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:27:43 embed-certs-598346 kubelet[3870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:27:43 embed-certs-598346 kubelet[3870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:27:53 embed-certs-598346 kubelet[3870]: E1207 21:27:53.423202    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:28:05 embed-certs-598346 kubelet[3870]: E1207 21:28:05.423060    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:28:17 embed-certs-598346 kubelet[3870]: E1207 21:28:17.422435    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:28:32 embed-certs-598346 kubelet[3870]: E1207 21:28:32.422677    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:28:43 embed-certs-598346 kubelet[3870]: E1207 21:28:43.431531    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:28:43 embed-certs-598346 kubelet[3870]: E1207 21:28:43.523754    3870 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:28:43 embed-certs-598346 kubelet[3870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:28:43 embed-certs-598346 kubelet[3870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:28:43 embed-certs-598346 kubelet[3870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:28:57 embed-certs-598346 kubelet[3870]: E1207 21:28:57.423560    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:29:08 embed-certs-598346 kubelet[3870]: E1207 21:29:08.422855    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:29:21 embed-certs-598346 kubelet[3870]: E1207 21:29:21.423203    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:29:35 embed-certs-598346 kubelet[3870]: E1207 21:29:35.422300    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:29:43 embed-certs-598346 kubelet[3870]: E1207 21:29:43.523485    3870 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:29:43 embed-certs-598346 kubelet[3870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:29:43 embed-certs-598346 kubelet[3870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:29:43 embed-certs-598346 kubelet[3870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:29:46 embed-certs-598346 kubelet[3870]: E1207 21:29:46.422853    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:29:57 embed-certs-598346 kubelet[3870]: E1207 21:29:57.423143    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	
	* 
	* ==> storage-provisioner [55d67718482d4572c85c9612435da05cbca02696fb9f0abe9867d2a9bb2ab0f7] <==
	* I1207 21:21:00.077379       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 21:21:00.092380       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 21:21:00.092666       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 21:21:00.104745       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 21:21:00.106160       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f730826-a22c-4e32-bb83-4169ecd2820a", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-598346_c58c72fa-4f91-4dd3-8a04-87db3ee51497 became leader
	I1207 21:21:00.106222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-598346_c58c72fa-4f91-4dd3-8a04-87db3ee51497!
	I1207 21:21:00.207069       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-598346_c58c72fa-4f91-4dd3-8a04-87db3ee51497!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-598346 -n embed-certs-598346
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-598346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-pstg2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-598346 describe pod metrics-server-57f55c9bc5-pstg2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-598346 describe pod metrics-server-57f55c9bc5-pstg2: exit status 1 (65.943434ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-pstg2" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-598346 describe pod metrics-server-57f55c9bc5-pstg2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.15s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1207 21:21:41.699708   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-07 21:30:26.04141434 +0000 UTC m=+5350.209559315
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-275828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-275828 logs -n 25: (1.625384733s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-620116 -- sudo                         | cert-options-620116          | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-620116                                 | cert-options-620116          | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:10 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:08 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-099448                              | stopped-upgrade-099448       | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:07 UTC |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-483745        | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-121798 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | disable-driver-mounts-121798                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:10 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-598346            | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC | 07 Dec 23 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-950431             | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-275828  | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-483745             | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-598346                 | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-950431                  | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-275828       | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 21:12:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 21:12:54.827966   51113 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:12:54.828121   51113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:12:54.828131   51113 out.go:309] Setting ErrFile to fd 2...
	I1207 21:12:54.828138   51113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:12:54.828309   51113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:12:54.828894   51113 out.go:303] Setting JSON to false
	I1207 21:12:54.829778   51113 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6921,"bootTime":1701976654,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:12:54.829872   51113 start.go:138] virtualization: kvm guest
	I1207 21:12:54.832359   51113 out.go:177] * [default-k8s-diff-port-275828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:12:54.833958   51113 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:12:54.833997   51113 notify.go:220] Checking for updates...
	I1207 21:12:54.835484   51113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:12:54.837345   51113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:12:54.838716   51113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:12:54.840105   51113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:12:54.841497   51113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:12:54.843170   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:12:54.843587   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:12:54.843638   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:12:54.857987   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I1207 21:12:54.858345   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:12:54.858826   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:12:54.858846   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:12:54.859141   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:12:54.859317   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:12:54.859528   51113 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:12:54.859797   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:12:54.859827   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:12:54.873523   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1207 21:12:54.873866   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:12:54.874374   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:12:54.874399   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:12:54.874726   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:12:54.874907   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:12:54.906909   51113 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 21:12:54.908496   51113 start.go:298] selected driver: kvm2
	I1207 21:12:54.908515   51113 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:12:54.908626   51113 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:12:54.909287   51113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:54.909431   51113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:12:54.924711   51113 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:12:54.925077   51113 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 21:12:54.925136   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:12:54.925149   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:12:54.925174   51113 start_flags.go:323] config:
	{Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-27582
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:12:54.925311   51113 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:54.927216   51113 out.go:177] * Starting control plane node default-k8s-diff-port-275828 in cluster default-k8s-diff-port-275828
	I1207 21:12:51.859250   51037 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:12:51.859366   51037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:12:51.859440   51037 cache.go:107] acquiring lock: {Name:mke7b9cce1dd6177935767b47cf17b792acd813b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859507   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 21:12:51.859492   51037 cache.go:107] acquiring lock: {Name:mk57eae37995939df6ffd0df03832314e9e6100e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859493   51037 cache.go:107] acquiring lock: {Name:mk5a91936dc04372c96de7514149d2b4b0d17dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859522   51037 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.402µs
	I1207 21:12:51.859538   51037 cache.go:107] acquiring lock: {Name:mk4c716c1104ca016c5e335d1cbf204f19d0197f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859560   51037 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 21:12:51.859581   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 exists
	I1207 21:12:51.859591   51037 start.go:365] acquiring machines lock for no-preload-950431: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:12:51.859593   51037 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1" took 111.482µs
	I1207 21:12:51.859611   51037 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859596   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 exists
	I1207 21:12:51.859564   51037 cache.go:107] acquiring lock: {Name:mke02250ffd1d3b6fb4470dd05093397053b289d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859627   51037 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1" took 139.857µs
	I1207 21:12:51.859637   51037 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859588   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I1207 21:12:51.859647   51037 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 112.196µs
	I1207 21:12:51.859621   51037 cache.go:107] acquiring lock: {Name:mk2a1c8afaf74efaf0daac8bf102ee63aa4b5154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859664   51037 cache.go:107] acquiring lock: {Name:mk042626599761dccdc47fcf8ee95d59d24917b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859660   51037 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I1207 21:12:51.859443   51037 cache.go:107] acquiring lock: {Name:mk69e12850117516cff168d811605a739d29808c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859701   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I1207 21:12:51.859715   51037 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 185.872µs
	I1207 21:12:51.859736   51037 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I1207 21:12:51.859728   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 exists
	I1207 21:12:51.859750   51037 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1" took 313.668µs
	I1207 21:12:51.859758   51037 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859796   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 exists
	I1207 21:12:51.859809   51037 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1" took 179.42µs
	I1207 21:12:51.859823   51037 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859808   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I1207 21:12:51.859910   51037 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 310.345µs
	I1207 21:12:51.859931   51037 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I1207 21:12:51.859947   51037 cache.go:87] Successfully saved all images to host disk.
	I1207 21:12:57.714205   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:12:54.928473   51113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:12:54.928503   51113 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 21:12:54.928516   51113 cache.go:56] Caching tarball of preloaded images
	I1207 21:12:54.928608   51113 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:12:54.928621   51113 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 21:12:54.928718   51113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:12:54.928893   51113 start.go:365] acquiring machines lock for default-k8s-diff-port-275828: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:13:00.786234   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:06.866234   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:09.938211   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:16.018206   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:19.090196   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:25.170164   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:28.242299   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:34.322194   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:37.394241   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:43.474183   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:46.546186   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:52.626214   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:55.698176   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:01.778218   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:04.850228   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:10.930239   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:14.002222   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:20.082270   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:23.154237   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:29.234226   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:32.306242   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:38.386218   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:41.458157   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:47.538219   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:50.610223   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:56.690260   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:59.766215   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:05.842220   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:08.914154   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:14.994193   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:18.066232   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:21.070365   50624 start.go:369] acquired machines lock for "embed-certs-598346" in 3m44.734224905s
	I1207 21:15:21.070421   50624 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:15:21.070427   50624 fix.go:54] fixHost starting: 
	I1207 21:15:21.070755   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:15:21.070787   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:15:21.085298   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I1207 21:15:21.085643   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:15:21.086150   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:15:21.086172   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:15:21.086491   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:15:21.086681   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:21.086828   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:15:21.088256   50624 fix.go:102] recreateIfNeeded on embed-certs-598346: state=Stopped err=<nil>
	I1207 21:15:21.088283   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	W1207 21:15:21.088465   50624 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:15:21.090020   50624 out.go:177] * Restarting existing kvm2 VM for "embed-certs-598346" ...
	I1207 21:15:21.091364   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Start
	I1207 21:15:21.091521   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring networks are active...
	I1207 21:15:21.092215   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring network default is active
	I1207 21:15:21.092551   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring network mk-embed-certs-598346 is active
	I1207 21:15:21.092938   50624 main.go:141] libmachine: (embed-certs-598346) Getting domain xml...
	I1207 21:15:21.093647   50624 main.go:141] libmachine: (embed-certs-598346) Creating domain...
	I1207 21:15:21.067977   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:15:21.068024   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:15:21.070214   50270 machine.go:91] provisioned docker machine in 4m37.409386757s
	I1207 21:15:21.070272   50270 fix.go:56] fixHost completed within 4m37.430493841s
	I1207 21:15:21.070280   50270 start.go:83] releasing machines lock for "old-k8s-version-483745", held for 4m37.43051315s
	W1207 21:15:21.070299   50270 start.go:694] error starting host: provision: host is not running
	W1207 21:15:21.070399   50270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1207 21:15:21.070408   50270 start.go:709] Will try again in 5 seconds ...
	I1207 21:15:22.319220   50624 main.go:141] libmachine: (embed-certs-598346) Waiting to get IP...
	I1207 21:15:22.320059   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.320432   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.320505   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.320416   51516 retry.go:31] will retry after 306.732639ms: waiting for machine to come up
	I1207 21:15:22.629026   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.629495   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.629523   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.629465   51516 retry.go:31] will retry after 244.665765ms: waiting for machine to come up
	I1207 21:15:22.875896   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.876248   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.876275   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.876210   51516 retry.go:31] will retry after 389.522298ms: waiting for machine to come up
	I1207 21:15:23.267728   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:23.268119   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:23.268140   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:23.268064   51516 retry.go:31] will retry after 521.34699ms: waiting for machine to come up
	I1207 21:15:23.790614   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:23.791043   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:23.791067   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:23.791002   51516 retry.go:31] will retry after 493.71234ms: waiting for machine to come up
	I1207 21:15:24.286698   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:24.287121   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:24.287145   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:24.287061   51516 retry.go:31] will retry after 736.984501ms: waiting for machine to come up
	I1207 21:15:25.025941   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:25.026294   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:25.026317   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:25.026256   51516 retry.go:31] will retry after 1.06643424s: waiting for machine to come up
	I1207 21:15:26.093760   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:26.094266   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:26.094306   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:26.094211   51516 retry.go:31] will retry after 1.226791228s: waiting for machine to come up
	I1207 21:15:26.072827   50270 start.go:365] acquiring machines lock for old-k8s-version-483745: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:15:27.322536   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:27.322912   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:27.322940   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:27.322857   51516 retry.go:31] will retry after 1.246504696s: waiting for machine to come up
	I1207 21:15:28.571241   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:28.571651   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:28.571677   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:28.571606   51516 retry.go:31] will retry after 2.084958391s: waiting for machine to come up
	I1207 21:15:30.658654   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:30.659047   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:30.659080   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:30.658990   51516 retry.go:31] will retry after 2.104944011s: waiting for machine to come up
	I1207 21:15:32.765669   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:32.766136   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:32.766167   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:32.766076   51516 retry.go:31] will retry after 3.05038185s: waiting for machine to come up
	I1207 21:15:35.819082   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:35.819446   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:35.819477   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:35.819399   51516 retry.go:31] will retry after 3.445969037s: waiting for machine to come up
	I1207 21:15:40.686593   51037 start.go:369] acquired machines lock for "no-preload-950431" in 2m48.82697748s
	I1207 21:15:40.686639   51037 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:15:40.686646   51037 fix.go:54] fixHost starting: 
	I1207 21:15:40.687011   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:15:40.687043   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:15:40.703294   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I1207 21:15:40.703682   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:15:40.704245   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:15:40.704276   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:15:40.704620   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:15:40.704792   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:15:40.704938   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:15:40.706394   51037 fix.go:102] recreateIfNeeded on no-preload-950431: state=Stopped err=<nil>
	I1207 21:15:40.706420   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	W1207 21:15:40.706593   51037 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:15:40.709148   51037 out.go:177] * Restarting existing kvm2 VM for "no-preload-950431" ...
	I1207 21:15:39.269367   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.269776   50624 main.go:141] libmachine: (embed-certs-598346) Found IP for machine: 192.168.72.180
	I1207 21:15:39.269802   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has current primary IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.269808   50624 main.go:141] libmachine: (embed-certs-598346) Reserving static IP address...
	I1207 21:15:39.270234   50624 main.go:141] libmachine: (embed-certs-598346) Reserved static IP address: 192.168.72.180
	I1207 21:15:39.270265   50624 main.go:141] libmachine: (embed-certs-598346) Waiting for SSH to be available...
	I1207 21:15:39.270279   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "embed-certs-598346", mac: "52:54:00:15:56:8f", ip: "192.168.72.180"} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.270308   50624 main.go:141] libmachine: (embed-certs-598346) DBG | skip adding static IP to network mk-embed-certs-598346 - found existing host DHCP lease matching {name: "embed-certs-598346", mac: "52:54:00:15:56:8f", ip: "192.168.72.180"}
	I1207 21:15:39.270325   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Getting to WaitForSSH function...
	I1207 21:15:39.272292   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.272639   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.272674   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.272773   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Using SSH client type: external
	I1207 21:15:39.272827   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa (-rw-------)
	I1207 21:15:39.272869   50624 main.go:141] libmachine: (embed-certs-598346) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:15:39.272887   50624 main.go:141] libmachine: (embed-certs-598346) DBG | About to run SSH command:
	I1207 21:15:39.272903   50624 main.go:141] libmachine: (embed-certs-598346) DBG | exit 0
	I1207 21:15:39.363326   50624 main.go:141] libmachine: (embed-certs-598346) DBG | SSH cmd err, output: <nil>: 
	I1207 21:15:39.363757   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetConfigRaw
	I1207 21:15:39.364301   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:39.366828   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.367157   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.367206   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.367459   50624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/config.json ...
	I1207 21:15:39.367693   50624 machine.go:88] provisioning docker machine ...
	I1207 21:15:39.367713   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:39.367918   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.368085   50624 buildroot.go:166] provisioning hostname "embed-certs-598346"
	I1207 21:15:39.368104   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.368241   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.370443   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.370771   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.370798   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.371044   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.371192   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.371358   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.371507   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.371660   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:39.372058   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:39.372078   50624 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-598346 && echo "embed-certs-598346" | sudo tee /etc/hostname
	I1207 21:15:39.498370   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-598346
	
	I1207 21:15:39.498394   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.501284   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.501691   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.501711   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.501952   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.502135   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.502267   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.502432   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.502604   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:39.503052   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:39.503091   50624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-598346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-598346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-598346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:15:39.625683   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:15:39.625713   50624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:15:39.625735   50624 buildroot.go:174] setting up certificates
	I1207 21:15:39.625748   50624 provision.go:83] configureAuth start
	I1207 21:15:39.625760   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.626074   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:39.628753   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.629102   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.629125   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.629277   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.631206   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.631478   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.631507   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.631632   50624 provision.go:138] copyHostCerts
	I1207 21:15:39.631682   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:15:39.631698   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:15:39.631763   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:15:39.631844   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:15:39.631852   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:15:39.631874   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:15:39.631922   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:15:39.631928   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:15:39.631951   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:15:39.631993   50624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.embed-certs-598346 san=[192.168.72.180 192.168.72.180 localhost 127.0.0.1 minikube embed-certs-598346]
	I1207 21:15:39.968036   50624 provision.go:172] copyRemoteCerts
	I1207 21:15:39.968098   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:15:39.968121   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.970937   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.971356   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.971386   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.971627   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.971847   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.972010   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.972148   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.060156   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:15:40.082673   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1207 21:15:40.104263   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:15:40.125974   50624 provision.go:86] duration metric: configureAuth took 500.211549ms
	I1207 21:15:40.126012   50624 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:15:40.126233   50624 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:15:40.126317   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.129108   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.129484   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.129505   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.129662   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.129884   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.130039   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.130197   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.130358   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:40.130677   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:40.130698   50624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:15:40.439407   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:15:40.439438   50624 machine.go:91] provisioned docker machine in 1.071729841s
	I1207 21:15:40.439451   50624 start.go:300] post-start starting for "embed-certs-598346" (driver="kvm2")
	I1207 21:15:40.439465   50624 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:15:40.439504   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.439827   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:15:40.439860   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.442750   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.443135   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.443160   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.443400   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.443623   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.443811   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.443974   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.531350   50624 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:15:40.535614   50624 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:15:40.535644   50624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:15:40.535720   50624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:15:40.535813   50624 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:15:40.535938   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:15:40.543981   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:15:40.566714   50624 start.go:303] post-start completed in 127.248268ms
	I1207 21:15:40.566739   50624 fix.go:56] fixHost completed within 19.496310567s
	I1207 21:15:40.566763   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.569439   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.569774   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.569791   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.569915   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.570085   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.570257   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.570386   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.570534   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:40.570842   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:40.570855   50624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:15:40.686455   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983740.637211698
	
	I1207 21:15:40.686479   50624 fix.go:206] guest clock: 1701983740.637211698
	I1207 21:15:40.686486   50624 fix.go:219] Guest: 2023-12-07 21:15:40.637211698 +0000 UTC Remote: 2023-12-07 21:15:40.566742665 +0000 UTC m=+244.381466877 (delta=70.469033ms)
	I1207 21:15:40.686503   50624 fix.go:190] guest clock delta is within tolerance: 70.469033ms
	I1207 21:15:40.686508   50624 start.go:83] releasing machines lock for "embed-certs-598346", held for 19.61610992s
	I1207 21:15:40.686533   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.686809   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:40.689665   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.690046   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.690069   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.690242   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690685   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690903   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690988   50624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:15:40.691035   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.691162   50624 ssh_runner.go:195] Run: cat /version.json
	I1207 21:15:40.691196   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.693712   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.693943   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694078   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.694106   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694269   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.694295   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.694333   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694419   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.694501   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.694580   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.694685   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.694742   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.694816   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.694925   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.801618   50624 ssh_runner.go:195] Run: systemctl --version
	I1207 21:15:40.807496   50624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:15:40.967288   50624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:15:40.974223   50624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:15:40.974315   50624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:15:40.988391   50624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:15:40.988418   50624 start.go:475] detecting cgroup driver to use...
	I1207 21:15:40.988510   50624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:15:41.002379   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:15:41.016074   50624 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:15:41.016125   50624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:15:41.031096   50624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:15:41.044808   50624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:15:41.150630   50624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:15:40.710656   51037 main.go:141] libmachine: (no-preload-950431) Calling .Start
	I1207 21:15:40.710832   51037 main.go:141] libmachine: (no-preload-950431) Ensuring networks are active...
	I1207 21:15:40.711509   51037 main.go:141] libmachine: (no-preload-950431) Ensuring network default is active
	I1207 21:15:40.711813   51037 main.go:141] libmachine: (no-preload-950431) Ensuring network mk-no-preload-950431 is active
	I1207 21:15:40.712201   51037 main.go:141] libmachine: (no-preload-950431) Getting domain xml...
	I1207 21:15:40.712860   51037 main.go:141] libmachine: (no-preload-950431) Creating domain...
	I1207 21:15:41.269009   50624 docker.go:219] disabling docker service ...
	I1207 21:15:41.269067   50624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:15:41.281800   50624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:15:41.293694   50624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:15:41.413774   50624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:15:41.523960   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:15:41.536474   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:15:41.553611   50624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:15:41.553668   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.562741   50624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:15:41.562831   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.571841   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.580887   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.590259   50624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:15:41.599349   50624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:15:41.607259   50624 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:15:41.607314   50624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:15:41.619425   50624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:15:41.627826   50624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:15:41.736815   50624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:15:41.896418   50624 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:15:41.896505   50624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:15:41.901539   50624 start.go:543] Will wait 60s for crictl version
	I1207 21:15:41.901598   50624 ssh_runner.go:195] Run: which crictl
	I1207 21:15:41.905454   50624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:15:41.942196   50624 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:15:41.942267   50624 ssh_runner.go:195] Run: crio --version
	I1207 21:15:41.986024   50624 ssh_runner.go:195] Run: crio --version
	I1207 21:15:42.034806   50624 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:15:42.036352   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:42.039304   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:42.039704   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:42.039745   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:42.039930   50624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1207 21:15:42.043951   50624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:15:42.056473   50624 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:15:42.056535   50624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:15:42.099359   50624 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:15:42.099459   50624 ssh_runner.go:195] Run: which lz4
	I1207 21:15:42.103324   50624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 21:15:42.107440   50624 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:15:42.107476   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:15:44.063941   50624 crio.go:444] Took 1.960653 seconds to copy over tarball
	I1207 21:15:44.064018   50624 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:15:41.955586   51037 main.go:141] libmachine: (no-preload-950431) Waiting to get IP...
	I1207 21:15:41.956530   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:41.956967   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:41.957004   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:41.956919   51634 retry.go:31] will retry after 266.143384ms: waiting for machine to come up
	I1207 21:15:42.224547   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.225112   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.225142   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.225060   51634 retry.go:31] will retry after 314.364486ms: waiting for machine to come up
	I1207 21:15:42.540722   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.541264   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.541294   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.541225   51634 retry.go:31] will retry after 447.845741ms: waiting for machine to come up
	I1207 21:15:42.990858   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.991283   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.991310   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.991246   51634 retry.go:31] will retry after 494.509595ms: waiting for machine to come up
	I1207 21:15:43.487745   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:43.488268   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:43.488305   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:43.488218   51634 retry.go:31] will retry after 517.471464ms: waiting for machine to come up
	I1207 21:15:44.007846   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:44.008291   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:44.008322   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:44.008247   51634 retry.go:31] will retry after 755.53339ms: waiting for machine to come up
	I1207 21:15:44.765367   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:44.765799   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:44.765827   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:44.765743   51634 retry.go:31] will retry after 947.674862ms: waiting for machine to come up
	I1207 21:15:45.715436   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:45.715859   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:45.715890   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:45.715811   51634 retry.go:31] will retry after 1.304063218s: waiting for machine to come up
	I1207 21:15:47.049597   50624 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.985550761s)
	I1207 21:15:47.049622   50624 crio.go:451] Took 2.985655 seconds to extract the tarball
	I1207 21:15:47.049632   50624 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:15:47.089358   50624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:15:47.145982   50624 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:15:47.146007   50624 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:15:47.146069   50624 ssh_runner.go:195] Run: crio config
	I1207 21:15:47.205864   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:15:47.205888   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:15:47.205904   50624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:15:47.205933   50624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-598346 NodeName:embed-certs-598346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:15:47.206106   50624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-598346"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:15:47.206189   50624 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-598346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:15:47.206249   50624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:15:47.214998   50624 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:15:47.215065   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:15:47.223252   50624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1207 21:15:47.239698   50624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:15:47.258476   50624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1207 21:15:47.275957   50624 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1207 21:15:47.279689   50624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:15:47.295204   50624 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346 for IP: 192.168.72.180
	I1207 21:15:47.295234   50624 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:15:47.295391   50624 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:15:47.295436   50624 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:15:47.295501   50624 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/client.key
	I1207 21:15:47.295552   50624 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.key.379caec1
	I1207 21:15:47.295589   50624 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.key
	I1207 21:15:47.295686   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:15:47.295712   50624 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:15:47.295722   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:15:47.295748   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:15:47.295772   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:15:47.295795   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:15:47.295835   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:15:47.296438   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:15:47.324057   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:15:47.350921   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:15:47.378603   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:15:47.405443   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:15:47.429942   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:15:47.455437   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:15:47.478735   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:15:47.503326   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:15:47.525886   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:15:47.549414   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:15:47.572018   50624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:15:47.590990   50624 ssh_runner.go:195] Run: openssl version
	I1207 21:15:47.597874   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:15:47.610087   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.615875   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.615949   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.622941   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:15:47.632217   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:15:47.641323   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.645877   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.645955   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.651452   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:15:47.660848   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:15:47.670225   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.674620   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.674670   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.680118   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:15:47.689444   50624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:15:47.693775   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:15:47.699741   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:15:47.705442   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:15:47.710938   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:15:47.716367   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:15:47.721958   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:15:47.727403   50624 kubeadm.go:404] StartCluster: {Name:embed-certs-598346 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:15:47.727520   50624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:15:47.727599   50624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:15:47.771682   50624 cri.go:89] found id: ""
	I1207 21:15:47.771763   50624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:15:47.782923   50624 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:15:47.782946   50624 kubeadm.go:636] restartCluster start
	I1207 21:15:47.783020   50624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:15:47.791494   50624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.792645   50624 kubeconfig.go:92] found "embed-certs-598346" server: "https://192.168.72.180:8443"
	I1207 21:15:47.794953   50624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:15:47.804014   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:47.804096   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:47.815412   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.815433   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:47.815503   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:47.825646   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:48.326356   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:48.326438   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:48.338771   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:48.826334   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:48.826405   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:48.837498   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:49.325998   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:49.326084   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:49.338197   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:49.825701   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:49.825821   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:49.842649   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:50.326181   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:50.326277   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:50.341560   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:50.826087   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:50.826183   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:50.841186   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.021061   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:47.021495   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:47.021519   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:47.021459   51634 retry.go:31] will retry after 1.183999845s: waiting for machine to come up
	I1207 21:15:48.206768   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:48.207222   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:48.207250   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:48.207183   51634 retry.go:31] will retry after 1.595211966s: waiting for machine to come up
	I1207 21:15:49.804832   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:49.805298   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:49.805328   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:49.805229   51634 retry.go:31] will retry after 2.126345359s: waiting for machine to come up
	I1207 21:15:51.325994   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:51.326083   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:51.338573   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:51.826180   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:51.826253   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:51.837573   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:52.326115   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:52.326192   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:52.336984   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:52.826590   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:52.826681   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:52.837678   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:53.326205   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:53.326279   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:53.337579   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:53.826047   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:53.826145   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:53.840263   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:54.325765   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:54.325842   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:54.337452   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:54.825969   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:54.826063   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:54.837428   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:55.325968   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:55.326060   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:55.337128   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:55.826749   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:55.826832   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:55.838002   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:51.933915   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:51.934338   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:51.934372   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:51.934279   51634 retry.go:31] will retry after 2.448139802s: waiting for machine to come up
	I1207 21:15:54.384038   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:54.384399   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:54.384425   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:54.384351   51634 retry.go:31] will retry after 3.211975182s: waiting for machine to come up
	I1207 21:15:56.325893   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:56.326007   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:56.337698   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:56.825827   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:56.825964   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:56.836945   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:57.326560   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:57.326637   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:57.337299   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:57.804902   50624 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:15:57.804933   50624 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:15:57.804946   50624 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:15:57.805023   50624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:15:57.846788   50624 cri.go:89] found id: ""
	I1207 21:15:57.846877   50624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:15:57.861513   50624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:15:57.869730   50624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:15:57.869781   50624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:15:57.877777   50624 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:15:57.877801   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:57.992244   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:58.878385   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.051985   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.136414   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.232261   50624 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:15:59.232358   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:59.246262   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:59.760617   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:00.260132   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:00.760723   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:57.599056   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:57.599417   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:57.599444   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:57.599382   51634 retry.go:31] will retry after 5.532381184s: waiting for machine to come up
	I1207 21:16:04.442905   51113 start.go:369] acquired machines lock for "default-k8s-diff-port-275828" in 3m9.513966804s
	I1207 21:16:04.442972   51113 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:16:04.442985   51113 fix.go:54] fixHost starting: 
	I1207 21:16:04.443390   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:04.443434   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:04.460087   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45507
	I1207 21:16:04.460495   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:04.460991   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:04.461014   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:04.461405   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:04.461582   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:04.461705   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:04.463304   51113 fix.go:102] recreateIfNeeded on default-k8s-diff-port-275828: state=Stopped err=<nil>
	I1207 21:16:04.463337   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	W1207 21:16:04.463494   51113 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:16:04.465895   51113 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-275828" ...
	I1207 21:16:04.467328   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Start
	I1207 21:16:04.467485   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring networks are active...
	I1207 21:16:04.468206   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring network default is active
	I1207 21:16:04.468581   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring network mk-default-k8s-diff-port-275828 is active
	I1207 21:16:04.468943   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Getting domain xml...
	I1207 21:16:04.469483   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Creating domain...
	I1207 21:16:03.134233   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.134762   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has current primary IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.134794   51037 main.go:141] libmachine: (no-preload-950431) Found IP for machine: 192.168.50.100
	I1207 21:16:03.134811   51037 main.go:141] libmachine: (no-preload-950431) Reserving static IP address...
	I1207 21:16:03.135186   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.135209   51037 main.go:141] libmachine: (no-preload-950431) Reserved static IP address: 192.168.50.100
	I1207 21:16:03.135230   51037 main.go:141] libmachine: (no-preload-950431) DBG | skip adding static IP to network mk-no-preload-950431 - found existing host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"}
	I1207 21:16:03.135251   51037 main.go:141] libmachine: (no-preload-950431) DBG | Getting to WaitForSSH function...
	I1207 21:16:03.135265   51037 main.go:141] libmachine: (no-preload-950431) Waiting for SSH to be available...
	I1207 21:16:03.137331   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.137662   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.137689   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.137792   51037 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH client type: external
	I1207 21:16:03.137817   51037 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa (-rw-------)
	I1207 21:16:03.137854   51037 main.go:141] libmachine: (no-preload-950431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:03.137871   51037 main.go:141] libmachine: (no-preload-950431) DBG | About to run SSH command:
	I1207 21:16:03.137890   51037 main.go:141] libmachine: (no-preload-950431) DBG | exit 0
	I1207 21:16:03.229593   51037 main.go:141] libmachine: (no-preload-950431) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:03.230019   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:16:03.230604   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:03.233069   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.233426   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.233462   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.233661   51037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:16:03.233837   51037 machine.go:88] provisioning docker machine ...
	I1207 21:16:03.233855   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:03.234081   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.234254   51037 buildroot.go:166] provisioning hostname "no-preload-950431"
	I1207 21:16:03.234277   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.234386   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.236593   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.236859   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.236892   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.237079   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.237243   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.237396   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.237522   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.237653   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.238000   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.238016   51037 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-950431 && echo "no-preload-950431" | sudo tee /etc/hostname
	I1207 21:16:03.374959   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-950431
	
	I1207 21:16:03.374999   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.377825   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.378212   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.378247   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.378389   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.378604   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.378763   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.378896   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.379041   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.379363   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.379399   51037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950431/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:03.510050   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:03.510081   51037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:03.510109   51037 buildroot.go:174] setting up certificates
	I1207 21:16:03.510119   51037 provision.go:83] configureAuth start
	I1207 21:16:03.510130   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.510367   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:03.512754   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.513120   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.513151   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.513289   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.515546   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.515894   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.515947   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.516066   51037 provision.go:138] copyHostCerts
	I1207 21:16:03.516119   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:03.516138   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:03.516206   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:03.516294   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:03.516303   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:03.516328   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:03.516398   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:03.516406   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:03.516430   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:03.516480   51037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.no-preload-950431 san=[192.168.50.100 192.168.50.100 localhost 127.0.0.1 minikube no-preload-950431]
	I1207 21:16:03.662663   51037 provision.go:172] copyRemoteCerts
	I1207 21:16:03.662732   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:03.662756   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.665043   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.665344   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.665379   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.665523   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.665713   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.665887   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.666049   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:03.757956   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:03.782348   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1207 21:16:03.806388   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:16:03.831058   51037 provision.go:86] duration metric: configureAuth took 320.927373ms
	I1207 21:16:03.831086   51037 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:03.831264   51037 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:16:03.831365   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.834104   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.834489   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.834535   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.834703   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.834901   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.835087   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.835224   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.835370   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.835699   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.835721   51037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:04.154758   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:04.154783   51037 machine.go:91] provisioned docker machine in 920.933844ms
	I1207 21:16:04.154795   51037 start.go:300] post-start starting for "no-preload-950431" (driver="kvm2")
	I1207 21:16:04.154810   51037 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:04.154829   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.155148   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:04.155173   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.157776   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.158131   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.158163   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.158336   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.158560   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.158733   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.158873   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.258325   51037 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:04.262930   51037 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:04.262950   51037 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:04.263011   51037 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:04.263077   51037 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:04.263177   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:04.271602   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:04.303816   51037 start.go:303] post-start completed in 148.990598ms
	I1207 21:16:04.303849   51037 fix.go:56] fixHost completed within 23.617201529s
	I1207 21:16:04.303873   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.306576   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.306930   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.306962   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.307104   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.307326   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.307458   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.307591   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.307773   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:04.308242   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:04.308260   51037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:04.442724   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983764.388433819
	
	I1207 21:16:04.442748   51037 fix.go:206] guest clock: 1701983764.388433819
	I1207 21:16:04.442757   51037 fix.go:219] Guest: 2023-12-07 21:16:04.388433819 +0000 UTC Remote: 2023-12-07 21:16:04.303852803 +0000 UTC m=+192.597462932 (delta=84.581016ms)
	I1207 21:16:04.442797   51037 fix.go:190] guest clock delta is within tolerance: 84.581016ms
	I1207 21:16:04.442801   51037 start.go:83] releasing machines lock for "no-preload-950431", held for 23.756181397s
	I1207 21:16:04.442827   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.443065   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:04.446137   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.446578   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.446612   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.446797   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447413   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447656   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447732   51037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:04.447783   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.447902   51037 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:04.447923   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.450882   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451025   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451253   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.451280   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451470   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.451481   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.451507   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451654   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.451720   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.451923   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.452043   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.452098   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.452561   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.452761   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.565982   51037 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:04.573821   51037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:04.741571   51037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:04.749951   51037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:04.750038   51037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:04.770148   51037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:04.770176   51037 start.go:475] detecting cgroup driver to use...
	I1207 21:16:04.770244   51037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:04.787798   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:04.802346   51037 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:04.802415   51037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:04.819638   51037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:04.836910   51037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:04.947330   51037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:05.087698   51037 docker.go:219] disabling docker service ...
	I1207 21:16:05.087794   51037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:05.104790   51037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:05.122187   51037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:05.252225   51037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:05.394598   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:05.408596   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:05.429804   51037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:16:05.429876   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.441617   51037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:05.441700   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.452787   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.462684   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.472827   51037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:05.485493   51037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:05.495282   51037 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:05.495367   51037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:05.512972   51037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:05.523817   51037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:05.674940   51037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:05.866827   51037 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:05.866913   51037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:05.873044   51037 start.go:543] Will wait 60s for crictl version
	I1207 21:16:05.873109   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:05.878484   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:05.919888   51037 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:05.919979   51037 ssh_runner.go:195] Run: crio --version
	I1207 21:16:05.976795   51037 ssh_runner.go:195] Run: crio --version
	I1207 21:16:06.034745   51037 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1207 21:16:01.260865   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:01.760580   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:01.790951   50624 api_server.go:72] duration metric: took 2.55868777s to wait for apiserver process to appear ...
	I1207 21:16:01.790981   50624 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:01.791000   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.338427   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:05.338467   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:05.338483   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.436356   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:05.436385   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:05.937143   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.943626   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:05.943656   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:06.036269   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:06.039546   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:06.039919   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:06.039968   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:06.040205   51037 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:06.044899   51037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:06.061053   51037 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:16:06.061106   51037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:06.099113   51037 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1207 21:16:06.099136   51037 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:16:06.099196   51037 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:06.099225   51037 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.099246   51037 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1207 21:16:06.099283   51037 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.099314   51037 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.099229   51037 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.099419   51037 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.099484   51037 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.100960   51037 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.100961   51037 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.101035   51037 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1207 21:16:06.100967   51037 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.100967   51037 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.100970   51037 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.100970   51037 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.100973   51037 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:06.234869   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.272014   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.275605   51037 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1207 21:16:06.275659   51037 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.275716   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.295068   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.329385   51037 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1207 21:16:06.329435   51037 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.329449   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.329486   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.356701   51037 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1207 21:16:06.356744   51037 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.356790   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.382536   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1207 21:16:06.389671   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.391917   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.399801   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.399908   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.399980   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.400067   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.409081   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.616824   51037 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1207 21:16:06.616864   51037 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1207 21:16:06.616876   51037 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.616884   51037 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.616923   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.616930   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.617038   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1207 21:16:06.617075   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1207 21:16:06.617086   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.617114   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:06.617122   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.617199   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1207 21:16:06.617272   51037 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1207 21:16:06.617286   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:06.617305   51037 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.617353   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.631975   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.632094   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1207 21:16:06.632181   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.436900   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:06.457077   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:06.457122   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:06.936534   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:06.943658   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1207 21:16:06.952206   50624 api_server.go:141] control plane version: v1.28.4
	I1207 21:16:06.952239   50624 api_server.go:131] duration metric: took 5.161250619s to wait for apiserver health ...
	I1207 21:16:06.952251   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:16:06.952259   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:06.954179   50624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
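Note for readers tracing the apiserver wait logged above: the /healthz probe first returns 403 while the request is still anonymous (before the RBAC bootstrap roles exist, system:anonymous cannot read /healthz), then 500 with per-check [+]/[-] lines while post-start hooks are still failing, and finally 200 "ok". The following is a minimal standalone Go sketch of that kind of polling, for illustration only; it is not minikube's api_server.go, and the endpoint, interval, and timeout are taken from the log or chosen for the example.

	// healthzpoll: probe an apiserver /healthz endpoint until it returns HTTP 200,
	// treating 403/500 responses as "not ready yet", as in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// the apiserver's serving cert is not trusted by this host, as in the test VM
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.72.180:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}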
	I1207 21:16:05.844251   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting to get IP...
	I1207 21:16:05.845419   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:05.845793   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:05.845896   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:05.845790   51802 retry.go:31] will retry after 224.053393ms: waiting for machine to come up
	I1207 21:16:06.071071   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.071521   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.071545   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.071464   51802 retry.go:31] will retry after 272.776477ms: waiting for machine to come up
	I1207 21:16:06.346126   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.346739   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.346773   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.346683   51802 retry.go:31] will retry after 373.022784ms: waiting for machine to come up
	I1207 21:16:06.721567   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.722089   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.722115   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.722029   51802 retry.go:31] will retry after 380.100559ms: waiting for machine to come up
	I1207 21:16:07.103408   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.103853   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.103884   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:07.103798   51802 retry.go:31] will retry after 473.24776ms: waiting for machine to come up
	I1207 21:16:07.578548   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.579087   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.579232   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:07.579176   51802 retry.go:31] will retry after 892.826082ms: waiting for machine to come up
	I1207 21:16:08.473531   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:08.474027   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:08.474058   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:08.473989   51802 retry.go:31] will retry after 1.042648737s: waiting for machine to come up
	I1207 21:16:09.518823   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:09.519321   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:09.519363   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:09.519213   51802 retry.go:31] will retry after 948.481622ms: waiting for machine to come up
	I1207 21:16:06.955727   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:06.967724   50624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:16:06.990163   50624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:07.001387   50624 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:07.001425   50624 system_pods.go:61] "coredns-5dd5756b68-hlpsb" [c1f9f7db-0741-483c-9e39-d6f0ce4715d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:07.001436   50624 system_pods.go:61] "etcd-embed-certs-598346" [acda3700-87a2-4442-94e6-1d17288e7cee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:07.001446   50624 system_pods.go:61] "kube-apiserver-embed-certs-598346" [e1439056-061b-4add-a399-c55a816fba70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:07.001456   50624 system_pods.go:61] "kube-controller-manager-embed-certs-598346" [b4c80c36-da2c-4c46-b655-3c6bb2a96ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:07.001466   50624 system_pods.go:61] "kube-proxy-jqhnn" [e2635205-e67a-4b56-a7b4-82fe97b5fe7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:07.001490   50624 system_pods.go:61] "kube-scheduler-embed-certs-598346" [3b90e1d4-9c0f-46e4-a7b7-5e42717a8b70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:07.001499   50624 system_pods.go:61] "metrics-server-57f55c9bc5-sndh4" [9a052ce0-760f-4cfd-a958-971daa14ea02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:07.001511   50624 system_pods.go:61] "storage-provisioner" [bf244954-a1d7-4b51-9085-387e60d02792] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:07.001524   50624 system_pods.go:74] duration metric: took 11.336763ms to wait for pod list to return data ...
	I1207 21:16:07.001538   50624 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:07.007697   50624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:07.007737   50624 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:07.007752   50624 node_conditions.go:105] duration metric: took 6.207447ms to run NodePressure ...
	I1207 21:16:07.007770   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:07.287760   50624 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:07.297260   50624 kubeadm.go:787] kubelet initialised
	I1207 21:16:07.297285   50624 kubeadm.go:788] duration metric: took 9.495153ms waiting for restarted kubelet to initialise ...
	I1207 21:16:07.297296   50624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:07.304800   50624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.313488   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.313523   50624 pod_ready.go:81] duration metric: took 8.689063ms waiting for pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.313535   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.313545   50624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.321603   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "etcd-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.321637   50624 pod_ready.go:81] duration metric: took 8.078752ms waiting for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.321649   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "etcd-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.321658   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.333040   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.333068   50624 pod_ready.go:81] duration metric: took 11.399287ms waiting for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.333081   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.333089   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.397606   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.397632   50624 pod_ready.go:81] duration metric: took 64.53373ms waiting for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.397642   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.397648   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqhnn" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:08.713161   50624 pod_ready.go:92] pod "kube-proxy-jqhnn" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:08.713188   50624 pod_ready.go:81] duration metric: took 1.315530906s waiting for pod "kube-proxy-jqhnn" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:08.713201   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:10.919896   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:07.059825   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:10.061030   51037 ssh_runner.go:235] Completed: which crictl: (3.443650725s)
	I1207 21:16:10.061121   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:10.061130   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (3.443992158s)
	I1207 21:16:10.061160   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1207 21:16:10.061174   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.444033736s)
	I1207 21:16:10.061199   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1207 21:16:10.061225   51037 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:10.061245   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1: (3.429236441s)
	I1207 21:16:10.061286   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:10.061294   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:10.061296   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.429094571s)
	I1207 21:16:10.061330   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1207 21:16:10.061346   51037 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.001491955s)
	I1207 21:16:10.061361   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:10.061387   51037 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1207 21:16:10.061402   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:10.061430   51037 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:10.061469   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:10.469685   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:10.470224   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:10.470251   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:10.470187   51802 retry.go:31] will retry after 1.846436384s: waiting for machine to come up
	I1207 21:16:12.319116   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:12.319558   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:12.319590   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:12.319512   51802 retry.go:31] will retry after 1.415005437s: waiting for machine to come up
	I1207 21:16:13.736082   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:13.736599   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:13.736630   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:13.736533   51802 retry.go:31] will retry after 2.499952402s: waiting for machine to come up
	I1207 21:16:13.413966   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:15.414181   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:14.287122   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.225788884s)
	I1207 21:16:14.287166   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1207 21:16:14.287165   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: (4.226018563s)
	I1207 21:16:14.287190   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:14.287204   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:14.287130   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (4.225706156s)
	I1207 21:16:14.287208   51037 ssh_runner.go:235] Completed: which crictl: (4.225716226s)
	I1207 21:16:14.287294   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:14.287310   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (4.225934747s)
	I1207 21:16:14.287322   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1207 21:16:14.287325   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:14.287270   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1207 21:16:14.287238   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:14.338957   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 21:16:14.339087   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:16.589704   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.302291312s)
	I1207 21:16:16.589740   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1207 21:16:16.589764   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:16.589777   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.302463063s)
	I1207 21:16:16.589816   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:16.589817   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1207 21:16:16.589887   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.250737859s)
	I1207 21:16:16.589912   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1207 21:16:16.238979   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:16.239340   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:16.239367   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:16.239304   51802 retry.go:31] will retry after 2.478988074s: waiting for machine to come up
	I1207 21:16:18.720359   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:18.720892   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:18.720925   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:18.720840   51802 retry.go:31] will retry after 4.119588433s: waiting for machine to come up
	I1207 21:16:17.913477   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:18.407386   50624 pod_ready.go:92] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:18.407417   50624 pod_ready.go:81] duration metric: took 9.694207323s waiting for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:18.407431   50624 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:20.429952   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:18.142546   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (1.552699587s)
	I1207 21:16:18.142620   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1207 21:16:18.142658   51037 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:18.142737   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:20.432330   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.289556402s)
	I1207 21:16:20.432358   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1207 21:16:20.432386   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:20.432436   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:22.843120   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:22.843516   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:22.843540   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:22.843470   51802 retry.go:31] will retry after 3.969701228s: waiting for machine to come up
	I1207 21:16:22.431295   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:24.929166   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:22.891954   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.459495307s)
	I1207 21:16:22.891978   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1207 21:16:22.892001   51037 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:22.892056   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:23.742939   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1207 21:16:23.743011   51037 cache_images.go:123] Successfully loaded all cached images
	I1207 21:16:23.743021   51037 cache_images.go:92] LoadImages completed in 17.643875393s
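The "copy: skipping /var/lib/minikube/images/… (exists)" lines above reflect a check that avoids re-transferring cached image tarballs that are already present. A minimal sketch of that decision in Go, under assumed semantics (the real check compares the remote stat output; transfer here is a hypothetical callback):

	package main

	import (
		"fmt"
		"os"
	)

	// copyIfChanged transfers src to dst only when dst is missing or differs in
	// size or mtime; otherwise it logs a "skipping" message like the ones above.
	func copyIfChanged(src, dst string, transfer func(src, dst string) error) error {
		srcInfo, err := os.Stat(src)
		if err != nil {
			return err
		}
		if dstInfo, err := os.Stat(dst); err == nil &&
			dstInfo.Size() == srcInfo.Size() &&
			dstInfo.ModTime().Equal(srcInfo.ModTime()) {
			fmt.Printf("copy: skipping %s (exists)\n", dst)
			return nil
		}
		return transfer(src, dst)
	}

	func main() {
		_ = copyIfChanged("kube-proxy_v1.29.0-rc.1", "/var/lib/minikube/images/kube-proxy_v1.29.0-rc.1",
			func(src, dst string) error {
				fmt.Printf("transferring %s --> %s\n", src, dst)
				return nil
			})
	}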
	I1207 21:16:23.743107   51037 ssh_runner.go:195] Run: crio config
	I1207 21:16:23.802064   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:16:23.802087   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:23.802106   51037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:23.802128   51037 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.100 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-950431 NodeName:no-preload-950431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:16:23.802258   51037 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-950431"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:23.802329   51037 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-950431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-950431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
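The kubeadm config and kubelet unit printed above are rendered from the options struct logged at kubeadm.go:176. A minimal sketch of that kind of rendering with text/template; the struct fields and template text here are illustrative assumptions built from the values in the log, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmOpts is a hypothetical, trimmed-down stand-in for the options struct above.
	type kubeadmOpts struct {
		AdvertiseAddress  string
		APIServerPort     int
		NodeName          string
		KubernetesVersion string
		PodSubnet         string
		ServiceCIDR       string
	}

	var tmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`))

	func main() {
		// Values taken from the log above, for illustration only.
		_ = tmpl.Execute(os.Stdout, kubeadmOpts{
			AdvertiseAddress:  "192.168.50.100",
			APIServerPort:     8443,
			NodeName:          "no-preload-950431",
			KubernetesVersion: "v1.29.0-rc.1",
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
		})
	}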
	I1207 21:16:23.802382   51037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1207 21:16:23.813052   51037 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:23.813143   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:23.823249   51037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1207 21:16:23.840999   51037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1207 21:16:23.857599   51037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1207 21:16:23.873664   51037 ssh_runner.go:195] Run: grep 192.168.50.100	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:23.877208   51037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:23.888109   51037 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431 for IP: 192.168.50.100
	I1207 21:16:23.888148   51037 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:23.888298   51037 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:23.888333   51037 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:23.888394   51037 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.key
	I1207 21:16:23.888453   51037 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.key.8f36cd02
	I1207 21:16:23.888490   51037 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.key
	I1207 21:16:23.888598   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:23.888626   51037 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:23.888638   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:23.888669   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:23.888701   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:23.888725   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:23.888769   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:23.889405   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:23.911313   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 21:16:23.935796   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:23.960576   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:16:23.983952   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:24.005755   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:24.027232   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:24.049398   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:24.073975   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:24.097326   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:24.118396   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:24.140590   51037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:24.157287   51037 ssh_runner.go:195] Run: openssl version
	I1207 21:16:24.163079   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:24.173618   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.177973   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.178038   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.183537   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:24.193750   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:24.203836   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.208278   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.208324   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.213906   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:24.223939   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:24.234037   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.238379   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.238443   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.243650   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:24.253904   51037 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:24.258343   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:24.264011   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:24.269609   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:24.275294   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:24.280969   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:24.286763   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
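Each `openssl x509 -noout -in <cert> -checkend 86400` command above asks whether the certificate will still be valid 24 hours from now. The same check expressed in Go, as a minimal sketch (the certificate path and the helper name expiresWithin are illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within the
	// given window, mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h; regeneration needed")
		}
	}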
	I1207 21:16:24.292414   51037 kubeadm.go:404] StartCluster: {Name:no-preload-950431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-950431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:24.292505   51037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:24.292565   51037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:24.342426   51037 cri.go:89] found id: ""
	I1207 21:16:24.342596   51037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:24.353900   51037 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:24.353939   51037 kubeadm.go:636] restartCluster start
	I1207 21:16:24.353999   51037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:24.363465   51037 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.364722   51037 kubeconfig.go:92] found "no-preload-950431" server: "https://192.168.50.100:8443"
	I1207 21:16:24.367198   51037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:24.378918   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.378971   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.391331   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.391354   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.391393   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.403003   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.903722   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.903814   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.915891   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:25.403459   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:25.403568   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:25.415677   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:25.903683   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:25.903765   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:25.915474   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:26.403146   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:26.403258   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:26.414072   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.031043   50270 start.go:369] acquired machines lock for "old-k8s-version-483745" in 1m1.958159244s
	I1207 21:16:28.031117   50270 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:16:28.031127   50270 fix.go:54] fixHost starting: 
	I1207 21:16:28.031477   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:28.031504   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:28.047757   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I1207 21:16:28.048134   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:28.048598   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:16:28.048628   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:28.048962   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:28.049123   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:28.049278   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:16:28.050698   50270 fix.go:102] recreateIfNeeded on old-k8s-version-483745: state=Stopped err=<nil>
	I1207 21:16:28.050716   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	W1207 21:16:28.050943   50270 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:16:28.053462   50270 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-483745" ...
	I1207 21:16:28.054995   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Start
	I1207 21:16:28.055169   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring networks are active...
	I1207 21:16:28.055803   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring network default is active
	I1207 21:16:28.056167   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring network mk-old-k8s-version-483745 is active
	I1207 21:16:28.056613   50270 main.go:141] libmachine: (old-k8s-version-483745) Getting domain xml...
	I1207 21:16:28.057267   50270 main.go:141] libmachine: (old-k8s-version-483745) Creating domain...
	I1207 21:16:26.815724   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.816306   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Found IP for machine: 192.168.39.254
	I1207 21:16:26.816346   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Reserving static IP address...
	I1207 21:16:26.816373   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has current primary IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.816843   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-275828", mac: "52:54:00:f3:1f:c5", ip: "192.168.39.254"} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.816874   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Reserved static IP address: 192.168.39.254
	I1207 21:16:26.816895   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | skip adding static IP to network mk-default-k8s-diff-port-275828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-275828", mac: "52:54:00:f3:1f:c5", ip: "192.168.39.254"}
	I1207 21:16:26.816916   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Getting to WaitForSSH function...
	I1207 21:16:26.816933   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for SSH to be available...
	I1207 21:16:26.819265   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.819625   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.819654   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.819808   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Using SSH client type: external
	I1207 21:16:26.819840   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa (-rw-------)
	I1207 21:16:26.819880   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:26.819908   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | About to run SSH command:
	I1207 21:16:26.819930   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | exit 0
	I1207 21:16:26.913932   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:26.914232   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetConfigRaw
	I1207 21:16:26.915043   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:26.917486   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.917899   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.917944   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.918182   51113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:16:26.918360   51113 machine.go:88] provisioning docker machine ...
	I1207 21:16:26.918380   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:26.918587   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:26.918775   51113 buildroot.go:166] provisioning hostname "default-k8s-diff-port-275828"
	I1207 21:16:26.918805   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:26.918971   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:26.921227   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.921482   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.921515   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.921657   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:26.921818   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:26.922006   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:26.922162   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:26.922317   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:26.922695   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:26.922713   51113 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-275828 && echo "default-k8s-diff-port-275828" | sudo tee /etc/hostname
	I1207 21:16:27.066745   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-275828
	
	I1207 21:16:27.066778   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.069493   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.069842   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.069895   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.070078   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.070295   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.070446   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.070596   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.070824   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.071271   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.071302   51113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-275828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-275828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-275828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:27.206475   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:27.206503   51113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:27.206534   51113 buildroot.go:174] setting up certificates
	I1207 21:16:27.206545   51113 provision.go:83] configureAuth start
	I1207 21:16:27.206553   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:27.206818   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:27.209295   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.209632   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.209666   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.209763   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.211882   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.212147   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.212176   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.212250   51113 provision.go:138] copyHostCerts
	I1207 21:16:27.212306   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:27.212326   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:27.212396   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:27.212501   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:27.212511   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:27.212540   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:27.212617   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:27.212627   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:27.212656   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:27.212728   51113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-275828 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube default-k8s-diff-port-275828]
	I1207 21:16:27.273212   51113 provision.go:172] copyRemoteCerts
	I1207 21:16:27.273291   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:27.273321   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.275905   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.276185   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.276219   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.276380   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.276569   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.276703   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.276814   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:27.371834   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:27.394096   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1207 21:16:27.416619   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:16:27.443103   51113 provision.go:86] duration metric: configureAuth took 236.548224ms
	I1207 21:16:27.443127   51113 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:27.443336   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:16:27.443406   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.446005   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.446303   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.446334   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.446477   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.446648   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.446789   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.446959   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.447158   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.447600   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.447623   51113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:27.760539   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:27.760582   51113 machine.go:91] provisioned docker machine in 842.207987ms
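
The provisioning step just above pushes a one-line CRIO_MINIKUBE_OPTIONS file over SSH and restarts CRI-O; the `%!s(MISSING)` token in the logged command is how Go's fmt package renders a format verb with no matching argument, so it is very likely a logging artifact rather than text in the file that was written. A minimal sketch of running such a command with golang.org/x/crypto/ssh, assuming the key path and host values seen in the log (not minikube's own sshutil code):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("id_rsa") // placeholder for the machine's SSH key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.254:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Write the sysconfig drop-in and restart CRI-O, as in the log above.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := session.CombinedOutput(cmd)
	fmt.Printf("%s\n", out)
	if err != nil {
		log.Fatal(err)
	}
}
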
	I1207 21:16:27.760608   51113 start.go:300] post-start starting for "default-k8s-diff-port-275828" (driver="kvm2")
	I1207 21:16:27.760617   51113 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:27.760633   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:27.760993   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:27.761030   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.763527   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.763923   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.763968   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.764077   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.764254   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.764386   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.764559   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:27.860772   51113 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:27.865258   51113 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:27.865285   51113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:27.865348   51113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:27.865422   51113 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:27.865537   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:27.874901   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:27.896890   51113 start.go:303] post-start completed in 136.257327ms
	I1207 21:16:27.896912   51113 fix.go:56] fixHost completed within 23.453929111s
	I1207 21:16:27.896932   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.899422   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.899740   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.899780   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.899916   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.900104   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.900265   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.900400   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.900601   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.900920   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.900935   51113 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:28.030917   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983787.976128099
	
	I1207 21:16:28.030936   51113 fix.go:206] guest clock: 1701983787.976128099
	I1207 21:16:28.030943   51113 fix.go:219] Guest: 2023-12-07 21:16:27.976128099 +0000 UTC Remote: 2023-12-07 21:16:27.896915587 +0000 UTC m=+213.119643923 (delta=79.212512ms)
	I1207 21:16:28.030970   51113 fix.go:190] guest clock delta is within tolerance: 79.212512ms
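
The clock check above parses `date +%s.%N` from the guest, compares it with the host's timestamp, and accepts the machine when the delta is small. Reproducing the arithmetic with the values from the log (the one-second tolerance below is an assumption for the sketch):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N` in the log above.
	guest := time.Unix(1701983787, 976128099)
	// Host-side timestamp taken just before the SSH round trip.
	remote := time.Date(2023, 12, 7, 21, 16, 27, 896915587, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance)
	// Prints delta=79.212512ms, matching the log's reported delta.
}
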
	I1207 21:16:28.030975   51113 start.go:83] releasing machines lock for "default-k8s-diff-port-275828", held for 23.588040931s
	I1207 21:16:28.031003   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.031255   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:28.033864   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.034277   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.034318   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.034501   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035101   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035283   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035354   51113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:28.035399   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:28.035519   51113 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:28.035543   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:28.038353   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038570   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038636   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.038675   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038789   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:28.038993   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:28.039013   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.039035   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.039152   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:28.039189   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:28.039319   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:28.039368   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:28.039495   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:28.039619   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:28.161850   51113 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:28.167540   51113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:28.311477   51113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:28.319102   51113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:28.319177   51113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:28.334118   51113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
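
The find/mv step above disables any bridge or podman CNI config by renaming it with a `.mk_disabled` suffix so the runtime ignores it. A rough Go equivalent of that rename pass (illustrative only, not minikube's cni package):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Disable bridge/podman configs by renaming, as the find command does.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Println("disabled", src)
		}
	}
}
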
	I1207 21:16:28.334138   51113 start.go:475] detecting cgroup driver to use...
	I1207 21:16:28.334187   51113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:28.351563   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:28.364950   51113 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:28.365015   51113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:28.380367   51113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:28.396070   51113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:28.504230   51113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:28.634829   51113 docker.go:219] disabling docker service ...
	I1207 21:16:28.634893   51113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:28.648955   51113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:28.660615   51113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:28.781577   51113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:28.899307   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:28.912673   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:28.931310   51113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:16:28.931384   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.941006   51113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:28.941083   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.951712   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.963062   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.973981   51113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
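
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager. A Go sketch of one of those line rewrites, assuming the same file path and write access (not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Println(err)
	}
}
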
	I1207 21:16:28.984828   51113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:28.993884   51113 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:28.993992   51113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:29.007812   51113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:29.017781   51113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:29.147958   51113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:29.329720   51113 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:29.329781   51113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:29.336048   51113 start.go:543] Will wait 60s for crictl version
	I1207 21:16:29.336109   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:16:29.340075   51113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:29.378207   51113 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:29.378289   51113 ssh_runner.go:195] Run: crio --version
	I1207 21:16:29.438034   51113 ssh_runner.go:195] Run: crio --version
	I1207 21:16:29.487899   51113 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:16:29.489336   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:29.492387   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:29.492824   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:29.492858   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:29.493105   51113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:29.497882   51113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
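
The one-liner above refreshes the host.minikube.internal mapping: drop any stale line, append the new one, write a temp file, and copy it back with sudo. A local Go sketch of the filter-and-append part (the privileged copy is only printed as a hint):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "host.minikube.internal"
	const fresh = "192.168.39.1\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	var b strings.Builder
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry, like the grep -v in the log
		}
		b.WriteString(line)
		b.WriteByte('\n')
	}
	b.WriteString(fresh)
	b.WriteByte('\n')

	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(b.String()), 0644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("now run: sudo cp", tmp, "/etc/hosts")
}
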
	I1207 21:16:29.510857   51113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:16:29.510910   51113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:29.557513   51113 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:16:29.557590   51113 ssh_runner.go:195] Run: which lz4
	I1207 21:16:29.561849   51113 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 21:16:29.566351   51113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:16:29.566383   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:16:26.930512   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:29.442726   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:26.903645   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:26.903716   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:26.915728   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:27.403874   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:27.403939   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:27.415501   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:27.904082   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:27.904150   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:27.916404   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.404050   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:28.404143   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:28.416757   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.903144   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:28.903202   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:28.914709   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.403236   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:29.403324   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:29.415595   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.903823   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:29.903908   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:29.920093   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:30.403786   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:30.403864   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:30.417374   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:30.903246   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:30.903335   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:30.916333   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:31.403909   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:31.403984   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:31.418792   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
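
The repeated "Checking apiserver status" entries above are a simple poll: roughly every 500ms, look for a kube-apiserver process and stop at a deadline. A stripped-down local equivalent, with pgrep run directly instead of over SSH and a 30-second deadline chosen for the sketch:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls pgrep until a kube-apiserver process shows up or
// the context expires, mirroring the loop in the log above.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println("pid:", pid, "err:", err)
}
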
	I1207 21:16:29.352362   50270 main.go:141] libmachine: (old-k8s-version-483745) Waiting to get IP...
	I1207 21:16:29.353395   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.353871   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.353965   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.353847   51971 retry.go:31] will retry after 307.502031ms: waiting for machine to come up
	I1207 21:16:29.663412   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.663958   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.663990   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.663889   51971 retry.go:31] will retry after 328.013518ms: waiting for machine to come up
	I1207 21:16:29.993550   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.994129   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.994160   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.994066   51971 retry.go:31] will retry after 315.323859ms: waiting for machine to come up
	I1207 21:16:30.310570   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:30.311106   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:30.311139   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:30.311055   51971 retry.go:31] will retry after 547.317149ms: waiting for machine to come up
	I1207 21:16:30.859753   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:30.860500   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:30.860532   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:30.860479   51971 retry.go:31] will retry after 591.81737ms: waiting for machine to come up
	I1207 21:16:31.453939   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:31.454481   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:31.454508   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:31.454426   51971 retry.go:31] will retry after 818.736684ms: waiting for machine to come up
	I1207 21:16:32.274582   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:32.275065   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:32.275100   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:32.275018   51971 retry.go:31] will retry after 865.865666ms: waiting for machine to come up
	I1207 21:16:33.142356   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:33.142713   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:33.142748   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:33.142655   51971 retry.go:31] will retry after 1.270743306s: waiting for machine to come up
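
The "will retry after ..." lines come from a growing, jittered backoff while the driver waits for the VM to pick up a DHCP lease. A generic version of that pattern; the initial wait, growth factor, cap, and deadline below are assumptions rather than minikube's exact tuning:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or the deadline passes, sleeping a
// growing, jittered interval between attempts.
func retry(fn func() error, maxWait, deadline time.Duration) error {
	start := time.Now()
	wait := 300 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("gave up after %v: %w", deadline, err)
		}
		// Sleep the base wait plus up to 50% jitter, then grow the base.
		time.Sleep(wait + time.Duration(rand.Int63n(int64(wait/2))))
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
}

func main() {
	attempts := 0
	err := retry(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("machine has no IP yet")
		}
		return nil
	}, 2*time.Second, 15*time.Second)
	fmt.Println("attempts:", attempts, "err:", err)
}
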
	I1207 21:16:31.473652   51113 crio.go:444] Took 1.911834 seconds to copy over tarball
	I1207 21:16:31.473729   51113 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:16:34.448164   51113 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974406678s)
	I1207 21:16:34.448185   51113 crio.go:451] Took 2.974507 seconds to extract the tarball
	I1207 21:16:34.448196   51113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:16:34.493579   51113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:34.555669   51113 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:16:34.555694   51113 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:16:34.555760   51113 ssh_runner.go:195] Run: crio config
	I1207 21:16:34.637813   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:16:34.637855   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:34.637874   51113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:34.637909   51113 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-275828 NodeName:default-k8s-diff-port-275828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:16:34.638088   51113 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.254
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-275828"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
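
The rendered kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); the `"0%!"(MISSING)` values under evictionHard are most likely the same fmt logging artifact, with the written values being plain "0%". One way to sanity-check such a file is to decode each document and print its apiVersion/kind, for example with gopkg.in/yaml.v3 (an assumption here; minikube itself renders the file from a Go template rather than round-tripping YAML):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Decode each "---"-separated document in turn until EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}
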
	
	I1207 21:16:34.638186   51113 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-275828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1207 21:16:34.638255   51113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:16:34.651147   51113 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:34.651264   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:34.660855   51113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1207 21:16:34.678841   51113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:16:34.696338   51113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1207 21:16:34.718058   51113 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:34.722640   51113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:34.737097   51113 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828 for IP: 192.168.39.254
	I1207 21:16:34.737138   51113 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:34.737316   51113 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:34.737367   51113 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:34.737459   51113 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.key
	I1207 21:16:34.737557   51113 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.key.9e1cae77
	I1207 21:16:34.737614   51113 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.key
	I1207 21:16:34.737745   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:34.737783   51113 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:34.737799   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:34.737835   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:34.737870   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:34.737904   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:34.737976   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:34.738542   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:34.768389   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:16:34.801112   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:31.931027   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:34.430620   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:31.903642   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:31.903781   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:31.919330   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:32.403857   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:32.403949   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:32.419078   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:32.903477   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:32.903561   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:32.918946   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:33.403477   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:33.403605   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:33.416411   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:33.903561   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:33.903690   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:33.915554   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:34.379314   51037 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:16:34.379347   51037 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:16:34.379361   51037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:16:34.379450   51037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:34.427182   51037 cri.go:89] found id: ""
	I1207 21:16:34.427255   51037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:16:34.448141   51037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:16:34.462411   51037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:16:34.462494   51037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:34.474410   51037 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:34.474442   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:34.646144   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.548212   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.745964   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.818060   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.899490   51037 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:16:35.899616   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:35.916336   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:36.432466   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:34.415333   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:34.415908   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:34.415935   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:34.415819   51971 retry.go:31] will retry after 1.846003214s: waiting for machine to come up
	I1207 21:16:36.262900   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:36.263321   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:36.263343   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:36.263283   51971 retry.go:31] will retry after 1.858599877s: waiting for machine to come up
	I1207 21:16:38.124144   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:38.124669   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:38.124701   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:38.124622   51971 retry.go:31] will retry after 2.443451278s: waiting for machine to come up
	I1207 21:16:34.830966   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:16:35.094040   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:35.121234   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:35.148659   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:35.176938   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:35.206320   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:35.234907   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:35.261034   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:35.286500   51113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:35.306742   51113 ssh_runner.go:195] Run: openssl version
	I1207 21:16:35.314676   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:35.325752   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.332066   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.332147   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.339606   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:35.350274   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:35.360328   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.365516   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.365593   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.371482   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:35.381328   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:35.391869   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.396986   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.397051   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.402939   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:35.413428   51113 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:35.419598   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:35.427748   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:35.435492   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:35.442272   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:35.450180   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:35.459639   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
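
The series of `openssl x509 -noout -in <cert> -checkend 86400` runs above asks whether each certificate expires within the next 24 hours. The same check expressed in Go, with a placeholder path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
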
	I1207 21:16:35.467615   51113 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:35.467736   51113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:35.467793   51113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:35.504593   51113 cri.go:89] found id: ""
	I1207 21:16:35.504685   51113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:35.514155   51113 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:35.514182   51113 kubeadm.go:636] restartCluster start
	I1207 21:16:35.514255   51113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:35.525515   51113 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:35.526798   51113 kubeconfig.go:92] found "default-k8s-diff-port-275828" server: "https://192.168.39.254:8444"
	I1207 21:16:35.529447   51113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:35.540876   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:35.540934   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:35.555494   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:35.555519   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:35.555569   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:35.569455   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.069801   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:36.069903   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:36.083366   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.569984   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:36.570078   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:36.585387   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:37.069869   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:37.069980   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:37.086900   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:37.570490   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:37.570597   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:37.586215   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:38.069601   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:38.069709   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:38.084557   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:38.570194   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:38.570306   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:38.586686   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:39.070433   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:39.070518   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:39.088460   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:39.570579   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:39.570654   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:39.588478   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
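
The entries above show minikube polling for the kube-apiserver process by re-running "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every half second and treating a non-zero exit as "not up yet". A minimal Go sketch of that pattern follows; the waitForProcess helper, the 2-minute deadline and the 500ms interval are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess re-runs pgrep until it exits 0 (a matching process exists)
    // or the context deadline expires.
    func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
        for {
            // pgrep exits 0 when at least one process matches the pattern.
            if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("timed out waiting for process %q: %w", pattern, ctx.Err())
            case <-time.After(interval):
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kube-apiserver process is up")
    }
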
	I1207 21:16:36.785543   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:38.932981   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:36.932228   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:37.432719   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:37.932863   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.432661   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.932210   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.965380   51037 api_server.go:72] duration metric: took 3.065893789s to wait for apiserver process to appear ...
	I1207 21:16:38.965409   51037 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:38.965425   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:40.571221   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:40.571824   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:40.571873   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:40.571774   51971 retry.go:31] will retry after 2.349695925s: waiting for machine to come up
	I1207 21:16:42.923107   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:42.923582   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:42.923618   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:42.923549   51971 retry.go:31] will retry after 4.503894046s: waiting for machine to come up
	I1207 21:16:40.070126   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:40.070229   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:40.085086   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:40.570237   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:40.570329   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:40.584997   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:41.069554   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:41.069706   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:41.084654   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:41.570175   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:41.570260   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:41.581973   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:42.070546   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:42.070641   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:42.085859   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:42.570428   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:42.570534   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:42.585491   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.070017   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:43.070132   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:43.082461   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.569992   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:43.570093   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:43.585221   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:44.069681   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:44.069749   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:44.081499   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:44.569999   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:44.570083   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:44.585512   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.598644   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:43.598675   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:43.598689   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:43.649508   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:43.649553   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:44.150221   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:44.155890   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:44.155914   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:44.649610   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:44.655402   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:44.655437   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:45.150082   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:45.156432   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 200:
	ok
	I1207 21:16:45.172948   51037 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 21:16:45.172983   51037 api_server.go:131] duration metric: took 6.207566234s to wait for apiserver health ...
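
Between 21:16:38 and 21:16:45 the apiserver answers /healthz first with 403 (the anonymous probe is rejected before RBAC bootstrap completes), then 500 while the post-start hooks finish, and finally 200. A rough Go sketch of polling an HTTPS health endpoint until it returns 200 is shown below; the URL, timeout, interval and the decision to skip certificate verification are illustrative assumptions for a local test probe, not the logged code.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    func waitForHealthz(url string, timeout, interval time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver presents a cluster-local certificate here, so this
            // illustrative probe skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("healthz at %s did not become ready within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.100:8443/healthz", 2*time.Minute, 500*time.Millisecond); err != nil {
            fmt.Println(err)
        }
    }
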
	I1207 21:16:45.172996   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:16:45.173002   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:45.175018   51037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:16:41.430106   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:43.431417   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:45.932644   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:45.176436   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:45.231836   51037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:16:45.250256   51037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:45.270151   51037 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:45.270188   51037 system_pods.go:61] "coredns-76f75df574-qfwbr" [577161a0-8d68-41cc-88cd-1bd56e99b7aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:45.270198   51037 system_pods.go:61] "etcd-no-preload-950431" [8e49a6a7-c1e5-469d-9b30-c8e59471effb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:45.270210   51037 system_pods.go:61] "kube-apiserver-no-preload-950431" [15bc33db-995d-4102-9a2b-e991209c2946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:45.270220   51037 system_pods.go:61] "kube-controller-manager-no-preload-950431" [c263b58e-2aea-455d-8b2f-8915f1c6e820] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:45.270232   51037 system_pods.go:61] "kube-proxy-mzv22" [96e51e2f-17be-4724-ae28-99dfa63e9976] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:45.270241   51037 system_pods.go:61] "kube-scheduler-no-preload-950431" [c040d573-c78f-4149-8be6-af33fc6ea186] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:45.270257   51037 system_pods.go:61] "metrics-server-57f55c9bc5-fv8x4" [ac03a70e-1059-474f-b6f6-5974f0900bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:45.270268   51037 system_pods.go:61] "storage-provisioner" [3f942481-221c-4e69-a876-f82676cde788] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:45.270279   51037 system_pods.go:74] duration metric: took 19.99813ms to wait for pod list to return data ...
	I1207 21:16:45.270291   51037 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:45.274636   51037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:45.274667   51037 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:45.274681   51037 node_conditions.go:105] duration metric: took 4.381452ms to run NodePressure ...
	I1207 21:16:45.274700   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:45.597857   51037 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:45.603394   51037 kubeadm.go:787] kubelet initialised
	I1207 21:16:45.603423   51037 kubeadm.go:788] duration metric: took 5.535827ms waiting for restarted kubelet to initialise ...
	I1207 21:16:45.603432   51037 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:45.612509   51037 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-qfwbr" in "kube-system" namespace to be "Ready" ...
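
pod_ready.go is now waiting up to 4m0s for each system-critical pod to report the Ready condition. A simplified client-go sketch of that check follows; the kubeconfig path is a placeholder, the pod name is taken from the log entry above, and the polling loop and 2-second interval are assumptions rather than minikube's real logic.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-76f75df574-qfwbr", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
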
	I1207 21:16:47.430850   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.431364   50270 main.go:141] libmachine: (old-k8s-version-483745) Found IP for machine: 192.168.61.171
	I1207 21:16:47.431389   50270 main.go:141] libmachine: (old-k8s-version-483745) Reserving static IP address...
	I1207 21:16:47.431415   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has current primary IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.431791   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "old-k8s-version-483745", mac: "52:54:00:55:c8:35", ip: "192.168.61.171"} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.431827   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | skip adding static IP to network mk-old-k8s-version-483745 - found existing host DHCP lease matching {name: "old-k8s-version-483745", mac: "52:54:00:55:c8:35", ip: "192.168.61.171"}
	I1207 21:16:47.431845   50270 main.go:141] libmachine: (old-k8s-version-483745) Reserved static IP address: 192.168.61.171
	I1207 21:16:47.431866   50270 main.go:141] libmachine: (old-k8s-version-483745) Waiting for SSH to be available...
	I1207 21:16:47.431884   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Getting to WaitForSSH function...
	I1207 21:16:47.434071   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.434391   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.434423   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.434511   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Using SSH client type: external
	I1207 21:16:47.434548   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa (-rw-------)
	I1207 21:16:47.434590   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:47.434624   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | About to run SSH command:
	I1207 21:16:47.434642   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | exit 0
	I1207 21:16:47.529747   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:47.530150   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetConfigRaw
	I1207 21:16:47.530743   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:47.533361   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.533690   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.533728   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.534019   50270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/config.json ...
	I1207 21:16:47.534201   50270 machine.go:88] provisioning docker machine ...
	I1207 21:16:47.534219   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:47.534379   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.534549   50270 buildroot.go:166] provisioning hostname "old-k8s-version-483745"
	I1207 21:16:47.534578   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.534793   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.537037   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.537448   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.537482   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.537621   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:47.537788   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.537963   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.538107   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:47.538276   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:47.538728   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:47.538751   50270 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-483745 && echo "old-k8s-version-483745" | sudo tee /etc/hostname
	I1207 21:16:47.694514   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-483745
	
	I1207 21:16:47.694552   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.697720   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.698181   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.698217   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.698413   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:47.698602   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.698752   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.698958   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:47.699158   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:47.699617   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:47.699646   50270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-483745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-483745/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-483745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:47.851750   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:47.851781   50270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:47.851817   50270 buildroot.go:174] setting up certificates
	I1207 21:16:47.851830   50270 provision.go:83] configureAuth start
	I1207 21:16:47.851848   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.852181   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:47.855229   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.855607   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.855633   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.855891   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.858432   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.858811   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.858868   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.859066   50270 provision.go:138] copyHostCerts
	I1207 21:16:47.859126   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:47.859146   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:47.859211   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:47.859312   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:47.859322   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:47.859352   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:47.859426   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:47.859436   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:47.859465   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:47.859532   50270 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-483745 san=[192.168.61.171 192.168.61.171 localhost 127.0.0.1 minikube old-k8s-version-483745]
	I1207 21:16:48.080700   50270 provision.go:172] copyRemoteCerts
	I1207 21:16:48.080764   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:48.080787   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.083799   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.084261   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.084325   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.084545   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.084752   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.084874   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.085025   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.188586   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:48.217051   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1207 21:16:48.245046   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:16:48.276344   50270 provision.go:86] duration metric: configureAuth took 424.496766ms
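
configureAuth above issues a server certificate for the VM, signed by the local CA and carrying the machine IP, localhost, minikube and the hostname as SANs, then copies ca.pem, server.pem and server-key.pem to /etc/docker. A compact Go sketch of issuing such a SAN-bearing server certificate with crypto/x509 follows; the throwaway in-memory CA, the RSA key size, validity periods and output file names are simplified assumptions, not the provisioner's actual code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServerCert signs a server certificate for the given SANs with the CA.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, dnsNames []string, ips []net.IP) ([]byte, []byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-483745"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        // For illustration only: build a throwaway CA, then issue a server cert
        // with the SAN list shown in the provision.go entry above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        certPEM, keyPEM, err := issueServerCert(caCert, caKey,
            []string{"localhost", "minikube", "old-k8s-version-483745"},
            []net.IP{net.ParseIP("192.168.61.171"), net.ParseIP("127.0.0.1")})
        if err != nil {
            panic(err)
        }
        _ = os.WriteFile("server.pem", certPEM, 0644)
        _ = os.WriteFile("server-key.pem", keyPEM, 0600)
    }
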
	I1207 21:16:48.276381   50270 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:48.276627   50270 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:16:48.276720   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.280119   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.280556   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.280627   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.280943   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.281127   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.281312   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.281452   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.281621   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:48.282136   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:48.282160   50270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:45.070516   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:45.070618   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:45.087880   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:45.541593   51113 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:16:45.541627   51113 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:16:45.541640   51113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:16:45.541714   51113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:45.589291   51113 cri.go:89] found id: ""
	I1207 21:16:45.589394   51113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:16:45.606397   51113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:16:45.616135   51113 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:16:45.616192   51113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:45.625661   51113 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:45.625689   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:45.750072   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.619750   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.838835   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.935494   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:47.007474   51113 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:16:47.007536   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:47.020817   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:47.536948   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:48.036982   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:48.537584   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.036899   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.537400   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.575582   51113 api_server.go:72] duration metric: took 2.568102787s to wait for apiserver process to appear ...
	I1207 21:16:49.575614   51113 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:49.575636   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:49.576140   51113 api_server.go:269] stopped: https://192.168.39.254:8444/healthz: Get "https://192.168.39.254:8444/healthz": dial tcp 192.168.39.254:8444: connect: connection refused
	I1207 21:16:49.576174   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:49.576630   51113 api_server.go:269] stopped: https://192.168.39.254:8444/healthz: Get "https://192.168.39.254:8444/healthz": dial tcp 192.168.39.254:8444: connect: connection refused
	I1207 21:16:48.639642   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:48.639702   50270 machine.go:91] provisioned docker machine in 1.10547448s
	I1207 21:16:48.639715   50270 start.go:300] post-start starting for "old-k8s-version-483745" (driver="kvm2")
	I1207 21:16:48.639733   50270 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:48.639772   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.640106   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:48.640136   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.643155   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.643592   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.643625   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.643897   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.644101   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.644253   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.644374   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.756527   50270 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:48.761976   50270 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:48.762042   50270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:48.762117   50270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:48.762229   50270 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:48.762355   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:48.773495   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:48.802433   50270 start.go:303] post-start completed in 162.696963ms
	I1207 21:16:48.802464   50270 fix.go:56] fixHost completed within 20.771337135s
	I1207 21:16:48.802489   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.805389   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.805821   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.805853   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.806002   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.806221   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.806361   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.806516   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.806737   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:48.807177   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:48.807194   50270 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:48.948515   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983808.895290650
	
	I1207 21:16:48.948602   50270 fix.go:206] guest clock: 1701983808.895290650
	I1207 21:16:48.948622   50270 fix.go:219] Guest: 2023-12-07 21:16:48.89529065 +0000 UTC Remote: 2023-12-07 21:16:48.802469186 +0000 UTC m=+365.320601213 (delta=92.821464ms)
	I1207 21:16:48.948679   50270 fix.go:190] guest clock delta is within tolerance: 92.821464ms
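
fix.go compares the guest's "date +%s.%N" output against the host clock and accepts the ~93ms delta as within tolerance. A tiny Go sketch of that comparison, parsing the seconds.nanoseconds timestamp and checking it against a tolerance, is below; the 2-second tolerance is an illustrative assumption, while the timestamps are taken from the log entry above.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's "seconds.nanoseconds" timestamp and returns
    // the signed difference from the given host time.
    func clockDelta(guest string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        // Values from the fix.go entries above; the tolerance is an illustrative choice.
        delta, err := clockDelta("1701983808.895290650", time.Unix(1701983808, 802469186))
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, consider syncing time\n", delta)
        }
    }
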
	I1207 21:16:48.948694   50270 start.go:83] releasing machines lock for "old-k8s-version-483745", held for 20.917606045s
	I1207 21:16:48.948726   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.948967   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:48.952007   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.952392   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.952424   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.952680   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953302   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953494   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953578   50270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:48.953633   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.953877   50270 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:48.953904   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.957083   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957288   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957631   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.957656   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957798   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.957849   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957874   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.958105   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.958110   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.958284   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.958413   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.958443   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.958665   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.958668   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:49.082678   50270 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:49.091075   50270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:49.250638   50270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:49.259237   50270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:49.259312   50270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:49.279490   50270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:49.279520   50270 start.go:475] detecting cgroup driver to use...
	I1207 21:16:49.279592   50270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:49.301129   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:49.317758   50270 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:49.317832   50270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:49.335384   50270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:49.352808   50270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:49.487177   50270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:49.622551   50270 docker.go:219] disabling docker service ...
	I1207 21:16:49.622632   50270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:49.641913   50270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:49.655046   50270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:49.780471   50270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:49.903816   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:49.917447   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:49.939101   50270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1207 21:16:49.939170   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.949112   50270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:49.949187   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.958706   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.968115   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
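The sed edits above set the pause image and cgroup settings in CRI-O's drop-in config; under the usual CRI-O TOML layout, /etc/crio/crio.conf.d/02-crio.conf ends up containing roughly the following (a sketch of only the edited keys, not the full file, which also carries other defaults):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"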
	I1207 21:16:49.977516   50270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:49.987974   50270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:49.996996   50270 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:49.997069   50270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:50.009736   50270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
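The earlier sysctl failure only means the br_netfilter module was not loaded yet; the steps the runner takes here are equivalent to the standard manual prep (a hand-run sketch, not literal commands from this log):

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables=1   # succeeds once the module is loaded
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward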
	I1207 21:16:50.018888   50270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:50.136461   50270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:50.337931   50270 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:50.338013   50270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:50.344175   50270 start.go:543] Will wait 60s for crictl version
	I1207 21:16:50.344237   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:50.348418   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:50.387227   50270 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:50.387329   50270 ssh_runner.go:195] Run: crio --version
	I1207 21:16:50.439820   50270 ssh_runner.go:195] Run: crio --version
	I1207 21:16:50.492743   50270 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1207 21:16:48.431193   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:50.945823   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:47.635909   51037 pod_ready.go:102] pod "coredns-76f75df574-qfwbr" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:49.635091   51037 pod_ready.go:92] pod "coredns-76f75df574-qfwbr" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:49.635119   51037 pod_ready.go:81] duration metric: took 4.022584638s waiting for pod "coredns-76f75df574-qfwbr" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:49.635139   51037 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:51.656178   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:50.494290   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:50.496890   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:50.497226   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:50.497257   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:50.497557   50270 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:50.501988   50270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:50.516192   50270 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1207 21:16:50.516266   50270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:50.564641   50270 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1207 21:16:50.564723   50270 ssh_runner.go:195] Run: which lz4
	I1207 21:16:50.569306   50270 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 21:16:50.573458   50270 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:16:50.573483   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1207 21:16:52.405191   50270 crio.go:444] Took 1.835925 seconds to copy over tarball
	I1207 21:16:52.405260   50270 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:16:50.077304   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:54.602961   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:54.602994   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:54.603007   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:54.660014   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:54.660053   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:55.077712   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:55.102038   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:55.102068   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:55.577664   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:55.586714   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:55.586753   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:56.077361   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:56.084665   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 200:
	ok
	I1207 21:16:56.096164   51113 api_server.go:141] control plane version: v1.28.4
	I1207 21:16:56.096196   51113 api_server.go:131] duration metric: took 6.520574302s to wait for apiserver health ...
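The 403 and 500 responses above are the expected progression while the rbac/bootstrap-roles and priority-class post-start hooks finish; once they complete, /healthz flips to 200. A manual spot-check against the same endpoint would look roughly like this (hypothetical commands; -k skips TLS verification since the cluster CA is not in the host trust store):

	curl -k https://192.168.39.254:8444/healthz             # prints "ok" when healthy
	curl -k "https://192.168.39.254:8444/healthz?verbose"   # prints the per-check [+]/[-] list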
	I1207 21:16:56.096209   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:16:56.096219   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:53.431611   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:55.954091   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:53.656773   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:55.659213   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:56.811148   51113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:16:55.499497   50270 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094207903s)
	I1207 21:16:55.499524   50270 crio.go:451] Took 3.094311 seconds to extract the tarball
	I1207 21:16:55.499532   50270 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:16:55.539952   50270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:55.612029   50270 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1207 21:16:55.612059   50270 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:16:55.612164   50270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.612216   50270 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1207 21:16:55.612282   50270 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.612335   50270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.612216   50270 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.612433   50270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.612564   50270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.612575   50270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:55.614472   50270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.614496   50270 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1207 21:16:55.614496   50270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.614507   50270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.614513   50270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:55.614556   50270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.614571   50270 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.614556   50270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.744531   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1207 21:16:55.744539   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.747157   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.748014   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.754498   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.778012   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.781417   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.886272   50270 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1207 21:16:55.886318   50270 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1207 21:16:55.886371   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.949015   50270 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1207 21:16:55.949128   50270 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.949205   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.963217   50270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1207 21:16:55.963332   50270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.963422   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.966733   50270 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1207 21:16:55.966854   50270 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.966934   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.004614   50270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1207 21:16:56.004668   50270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:56.004721   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.015557   50270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1207 21:16:56.015655   50270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:56.015714   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.017603   50270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1207 21:16:56.017643   50270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:56.017686   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.017817   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1207 21:16:56.017913   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:56.018011   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:56.018087   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1207 21:16:56.018160   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:56.028183   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:56.030370   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:56.222552   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1207 21:16:56.222625   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1207 21:16:56.222673   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1207 21:16:56.222680   50270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.222731   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1207 21:16:56.222828   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1207 21:16:56.222911   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1207 21:16:56.236367   50270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1207 21:16:56.236387   50270 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.236440   50270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.236444   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1207 21:16:56.455526   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:58.094353   50270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.638791166s)
	I1207 21:16:58.094525   50270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.858047565s)
	I1207 21:16:58.094552   50270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1207 21:16:58.094591   50270 cache_images.go:92] LoadImages completed in 2.482516651s
	W1207 21:16:58.094650   50270 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
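For each image flagged as "needs transfer" above, the flow is: remove the wrong-hash copy with crictl, copy the cached tarball onto the node, then load it into CRI-O's image store via podman. A hand-run equivalent for the pause image (the scp step and <node> placeholder are illustrative; minikube performs the copy over its own SSH session):

	sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 <node>:/var/lib/minikube/images/pause_3.1
	sudo podman load -i /var/lib/minikube/images/pause_3.1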
	I1207 21:16:58.094729   50270 ssh_runner.go:195] Run: crio config
	I1207 21:16:58.191059   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:16:58.191083   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:58.191108   50270 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:58.191132   50270 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.171 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-483745 NodeName:old-k8s-version-483745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1207 21:16:58.191279   50270 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-483745"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-483745
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.171:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:58.191389   50270 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-483745 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-483745 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:16:58.191462   50270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1207 21:16:58.204882   50270 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:58.204948   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:58.217370   50270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1207 21:16:58.237205   50270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:16:58.256539   50270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
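The three files just written (the kubelet systemd drop-in, the kubelet unit, and the rendered kubeadm config) are what the bring-up path consumes next; the manual equivalent would be roughly the following (a sketch under the assumption of a plain kubeadm bootstrap, not literal commands from this log):

	sudo systemctl daemon-reload && sudo systemctl restart kubelet
	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml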
	I1207 21:16:58.276428   50270 ssh_runner.go:195] Run: grep 192.168.61.171	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:58.281568   50270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:58.295073   50270 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745 for IP: 192.168.61.171
	I1207 21:16:58.295112   50270 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:58.295295   50270 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:58.295368   50270 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:58.295493   50270 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.key
	I1207 21:16:58.295589   50270 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.key.13a54c20
	I1207 21:16:58.295658   50270 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.key
	I1207 21:16:58.295817   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:58.295861   50270 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:58.295887   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:58.295922   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:58.295972   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:58.296012   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:58.296067   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:58.296936   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:58.327708   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:16:58.354646   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:58.379025   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 21:16:58.404362   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:58.433648   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:58.459739   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:58.487457   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:58.516507   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:57.214999   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:57.244196   51113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
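The 457-byte conflist written to /etc/cni/net.d/1-k8s.conflist is minikube's default bridge CNI config; its general shape is sketched below (field values are illustrative, using the 10.244.0.0/16 pod CIDR seen elsewhere in this run, not the literal file contents):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}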
	I1207 21:16:57.264778   51113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:57.978177   51113 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:57.978214   51113 system_pods.go:61] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:57.978224   51113 system_pods.go:61] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:57.978232   51113 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:57.978241   51113 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:57.978248   51113 system_pods.go:61] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:57.978254   51113 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:57.978261   51113 system_pods.go:61] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:57.978267   51113 system_pods.go:61] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:57.978276   51113 system_pods.go:74] duration metric: took 713.475246ms to wait for pod list to return data ...
	I1207 21:16:57.978285   51113 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:57.983354   51113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:57.983379   51113 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:57.983389   51113 node_conditions.go:105] duration metric: took 5.099916ms to run NodePressure ...
	I1207 21:16:57.983403   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:58.583287   51113 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:58.590472   51113 kubeadm.go:787] kubelet initialised
	I1207 21:16:58.590500   51113 kubeadm.go:788] duration metric: took 7.176115ms waiting for restarted kubelet to initialise ...
	I1207 21:16:58.590509   51113 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:58.597622   51113 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.609459   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.609491   51113 pod_ready.go:81] duration metric: took 11.841558ms waiting for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.609503   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.609513   51113 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.620143   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.620172   51113 pod_ready.go:81] duration metric: took 10.647465ms waiting for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.620185   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.620193   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.633821   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.633850   51113 pod_ready.go:81] duration metric: took 13.645914ms waiting for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.633864   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.633872   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.647333   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.647359   51113 pod_ready.go:81] duration metric: took 13.477348ms waiting for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.647373   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.647385   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.988420   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-proxy-nmx2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.988448   51113 pod_ready.go:81] duration metric: took 341.054838ms waiting for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.988457   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-proxy-nmx2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.988465   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.388053   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.388080   51113 pod_ready.go:81] duration metric: took 399.605098ms waiting for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:59.388090   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.388097   51113 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.787887   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.787913   51113 pod_ready.go:81] duration metric: took 399.809388ms waiting for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:59.787925   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.787932   51113 pod_ready.go:38] duration metric: took 1.197413161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
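The same readiness gating can be reproduced by hand with kubectl against this profile's context (a hypothetical manual equivalent of the pod_ready polling above, shown here for the kube-proxy pods):

	kubectl --context default-k8s-diff-port-275828 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m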
	I1207 21:16:59.787945   51113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:16:59.801806   51113 ops.go:34] apiserver oom_adj: -16
	I1207 21:16:59.801828   51113 kubeadm.go:640] restartCluster took 24.28763849s
	I1207 21:16:59.801837   51113 kubeadm.go:406] StartCluster complete in 24.334230687s
	I1207 21:16:59.801855   51113 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:59.801945   51113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:16:59.804179   51113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:59.804458   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:16:59.804515   51113 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:16:59.804612   51113 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.804638   51113 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.804646   51113 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:16:59.804695   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:16:59.804714   51113 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.804727   51113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-275828"
	I1207 21:16:59.804704   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.805119   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805150   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805168   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.805180   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.805204   51113 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.805226   51113 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.805235   51113 addons.go:240] addon metrics-server should already be in state true
	I1207 21:16:59.805277   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.805627   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805663   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.811657   51113 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-275828" context rescaled to 1 replicas
	I1207 21:16:59.811696   51113 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:16:59.814005   51113 out.go:177] * Verifying Kubernetes components...
	I1207 21:16:59.815636   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:16:59.822134   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I1207 21:16:59.822558   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.822636   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34811
	I1207 21:16:59.822718   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I1207 21:16:59.823063   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823104   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823126   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.823128   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.823479   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.823605   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823619   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823636   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823636   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823943   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.823970   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.824050   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.824102   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.824193   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.824463   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.824502   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.828241   51113 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.828264   51113 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:16:59.828292   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.828676   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.830577   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.841996   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I1207 21:16:59.842283   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I1207 21:16:59.842697   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.842888   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.843254   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.843277   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.843391   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.843416   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.843638   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.843779   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.843831   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.843973   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.845644   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.845852   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.847586   51113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:59.847253   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I1207 21:16:59.849062   51113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:16:57.998272   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:00.429603   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:59.850487   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:16:59.850500   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:16:59.850514   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.849121   51113 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:16:59.850564   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:16:59.850583   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.849452   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.851054   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.851071   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.851664   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.852274   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.852315   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.854738   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.855190   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.855204   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.855394   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.855556   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.855649   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.855724   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.856210   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.856582   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.856596   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.856720   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.856846   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.857188   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.857324   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.871856   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I1207 21:16:59.872193   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.872726   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.872744   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.873088   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.873243   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.874542   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.874803   51113 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:16:59.874821   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:16:59.874840   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.877142   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.877524   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.877547   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.877753   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.877889   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.878024   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.878137   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.983279   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:17:00.040397   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:17:00.056981   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:17:00.057008   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:17:00.078195   51113 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1207 21:17:00.078235   51113 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-275828" to be "Ready" ...
	I1207 21:17:00.117369   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:17:00.117399   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:17:00.177756   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:17:00.177783   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:17:00.220667   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:17:01.338599   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.298167461s)
	I1207 21:17:01.338648   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338662   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.338747   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.355434262s)
	I1207 21:17:01.338789   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338802   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.338925   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.338945   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.338960   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338969   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.340360   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340373   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340381   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.340357   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340368   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340472   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.340490   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.340504   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.340785   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340788   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340804   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.347722   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.347741   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.347933   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.347950   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.347968   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.434021   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.213311264s)
	I1207 21:17:01.434084   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.434099   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.434391   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.434413   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.434410   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.434423   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.434434   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.434627   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.434637   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.434648   51113 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-275828"
	I1207 21:17:01.436476   51113 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:16:57.997177   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:59.154238   51037 pod_ready.go:92] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.154261   51037 pod_ready.go:81] duration metric: took 9.519115953s waiting for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.154270   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.159402   51037 pod_ready.go:92] pod "kube-apiserver-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.159421   51037 pod_ready.go:81] duration metric: took 5.143876ms waiting for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.159431   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.164107   51037 pod_ready.go:92] pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.164124   51037 pod_ready.go:81] duration metric: took 4.684573ms waiting for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.164134   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mzv22" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.168711   51037 pod_ready.go:92] pod "kube-proxy-mzv22" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.168727   51037 pod_ready.go:81] duration metric: took 4.587318ms waiting for pod "kube-proxy-mzv22" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.168734   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.201648   51037 pod_ready.go:92] pod "kube-scheduler-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.201676   51037 pod_ready.go:81] duration metric: took 32.935891ms waiting for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.201688   51037 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:01.509707   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:58.544765   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:58.571376   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:58.597700   50270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:58.616720   50270 ssh_runner.go:195] Run: openssl version
	I1207 21:16:58.622830   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:58.634656   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.640469   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.640526   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.646624   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:58.660113   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:58.670742   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.675735   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.675782   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.682821   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:58.696760   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:58.710547   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.716983   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.717048   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.724400   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:58.736496   50270 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:58.742587   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:58.750398   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:58.757537   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:58.764361   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:58.771280   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:58.778697   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:16:58.785873   50270 kubeadm.go:404] StartCluster: {Name:old-k8s-version-483745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-483745 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:58.786022   50270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:58.786079   50270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:58.834174   50270 cri.go:89] found id: ""
	I1207 21:16:58.834262   50270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:58.845932   50270 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:58.845958   50270 kubeadm.go:636] restartCluster start
	I1207 21:16:58.846025   50270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:58.855982   50270 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:58.857458   50270 kubeconfig.go:92] found "old-k8s-version-483745" server: "https://192.168.61.171:8443"
	I1207 21:16:58.860840   50270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:58.870183   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:58.870235   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:58.881631   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:58.881647   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:58.881693   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:58.892422   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:59.393094   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:59.393163   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:59.405578   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:59.893104   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:59.893160   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:59.906998   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:00.393560   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:00.393629   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:00.405837   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:00.893376   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:00.893472   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:00.905785   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.393118   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:01.393204   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:01.405693   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.893214   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:01.893348   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:01.906272   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:02.392588   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:02.392682   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:02.404717   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:02.893325   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:02.893425   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:02.906705   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:03.392549   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:03.392627   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:03.406493   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.437892   51113 addons.go:502] enable addons completed in 1.633389199s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:17:02.198851   51113 node_ready.go:58] node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:17:04.199518   51113 node_ready.go:58] node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:17:02.931262   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:05.431344   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:03.509733   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:05.511779   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:03.892711   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:03.892814   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:03.905553   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:04.393144   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:04.393236   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:04.406280   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:04.893375   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:04.893459   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:04.905715   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.393376   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:05.393473   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:05.405757   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.892719   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:05.892800   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:05.906258   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:06.392706   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:06.392787   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:06.405913   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:06.893392   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:06.893475   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:06.908660   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:07.392944   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:07.393037   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:07.408113   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:07.892488   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:07.892602   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:07.905157   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:08.393126   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:08.393209   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:08.405227   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.197790   51113 node_ready.go:49] node "default-k8s-diff-port-275828" has status "Ready":"True"
	I1207 21:17:05.197814   51113 node_ready.go:38] duration metric: took 5.119553512s waiting for node "default-k8s-diff-port-275828" to be "Ready" ...
	I1207 21:17:05.197825   51113 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:17:05.204644   51113 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:07.225887   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:09.229380   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:07.928733   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:09.929797   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:08.009114   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:10.012079   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:08.870396   50270 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:17:08.870427   50270 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:17:08.870439   50270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:17:08.870496   50270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:17:08.914337   50270 cri.go:89] found id: ""
	I1207 21:17:08.914412   50270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:17:08.932406   50270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:17:08.941877   50270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:17:08.942012   50270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:17:08.952016   50270 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:17:08.952038   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:09.086175   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:09.811331   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.044161   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.117851   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.218309   50270 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:17:10.218376   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:10.231007   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:10.754756   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.255150   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.755138   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.782482   50270 api_server.go:72] duration metric: took 1.564169408s to wait for apiserver process to appear ...
	I1207 21:17:11.782510   50270 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:17:11.782543   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:11.729870   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:12.727588   51113 pod_ready.go:92] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.727621   51113 pod_ready.go:81] duration metric: took 7.52294973s waiting for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.727635   51113 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.733893   51113 pod_ready.go:92] pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.733936   51113 pod_ready.go:81] duration metric: took 6.276731ms waiting for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.733951   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.739431   51113 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.739456   51113 pod_ready.go:81] duration metric: took 5.495838ms waiting for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.739467   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.745435   51113 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.745456   51113 pod_ready.go:81] duration metric: took 5.98053ms waiting for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.745468   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.751301   51113 pod_ready.go:92] pod "kube-proxy-nmx2z" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.751323   51113 pod_ready.go:81] duration metric: took 5.845741ms waiting for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.751333   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:13.122896   51113 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:13.122923   51113 pod_ready.go:81] duration metric: took 371.582675ms waiting for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:13.122936   51113 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:11.931676   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:14.433505   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:12.510180   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:14.511615   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.519216   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.783319   50270 api_server.go:269] stopped: https://192.168.61.171:8443/healthz: Get "https://192.168.61.171:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1207 21:17:16.783432   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:17.468175   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:17:17.468210   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:17:17.968919   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:17.975181   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1207 21:17:17.975206   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1207 21:17:18.469287   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:18.476311   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1207 21:17:18.476340   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1207 21:17:18.968605   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:18.974285   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:17:18.981956   50270 api_server.go:141] control plane version: v1.16.0
	I1207 21:17:18.981983   50270 api_server.go:131] duration metric: took 7.199466057s to wait for apiserver health ...
	I1207 21:17:18.981994   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:17:18.982000   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:17:18.983962   50270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:17:15.433488   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:17.434321   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.931755   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:19.430606   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:19.010615   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:21.512114   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:18.985481   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:17:18.994841   50270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:17:19.015418   50270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:17:19.029654   50270 system_pods.go:59] 7 kube-system pods found
	I1207 21:17:19.029685   50270 system_pods.go:61] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:17:19.029692   50270 system_pods.go:61] "etcd-old-k8s-version-483745" [4a920248-1b35-4834-9e6f-a0e7567b5bb8] Running
	I1207 21:17:19.029699   50270 system_pods.go:61] "kube-apiserver-old-k8s-version-483745" [aaba6fb9-56a1-497d-a398-5c685f5500dd] Running
	I1207 21:17:19.029706   50270 system_pods.go:61] "kube-controller-manager-old-k8s-version-483745" [a13bda00-a0f4-4f59-8b52-65589579efcf] Running
	I1207 21:17:19.029711   50270 system_pods.go:61] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:17:19.029715   50270 system_pods.go:61] "kube-scheduler-old-k8s-version-483745" [4fc3e12a-e294-457e-912f-0ed765ad4def] Running
	I1207 21:17:19.029718   50270 system_pods.go:61] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:17:19.029726   50270 system_pods.go:74] duration metric: took 14.290629ms to wait for pod list to return data ...
	I1207 21:17:19.029739   50270 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:17:19.033868   50270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:17:19.033897   50270 node_conditions.go:123] node cpu capacity is 2
	I1207 21:17:19.033911   50270 node_conditions.go:105] duration metric: took 4.166175ms to run NodePressure ...
	I1207 21:17:19.033945   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:19.284413   50270 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:17:19.288373   50270 retry.go:31] will retry after 182.556746ms: kubelet not initialised
	I1207 21:17:19.479987   50270 retry.go:31] will retry after 253.110045ms: kubelet not initialised
	I1207 21:17:19.744586   50270 retry.go:31] will retry after 608.133785ms: kubelet not initialised
	I1207 21:17:20.357758   50270 retry.go:31] will retry after 829.182382ms: kubelet not initialised
	I1207 21:17:21.192621   50270 retry.go:31] will retry after 998.365497ms: kubelet not initialised
	I1207 21:17:22.196882   50270 retry.go:31] will retry after 1.144379185s: kubelet not initialised
	I1207 21:17:23.346660   50270 retry.go:31] will retry after 4.175853771s: kubelet not initialised
	I1207 21:17:19.937119   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:22.433221   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:21.430858   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:23.929526   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:25.932244   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:24.011486   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:26.509908   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:27.529200   50270 retry.go:31] will retry after 6.099259697s: kubelet not initialised
	I1207 21:17:24.932035   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:26.932432   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:28.935455   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:27.933244   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:30.431008   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:29.009917   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:31.509259   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:31.432441   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.933226   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:32.431713   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:34.931903   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.510686   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:35.511611   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.635018   50270 retry.go:31] will retry after 3.426713545s: kubelet not initialised
	I1207 21:17:37.067021   50270 retry.go:31] will retry after 7.020738309s: kubelet not initialised
	I1207 21:17:35.933872   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:38.432200   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:37.432208   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:39.432443   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:38.008964   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:40.013143   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:40.434554   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:42.935808   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:41.931614   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:44.431445   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:42.510798   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:45.010221   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:44.093245   50270 retry.go:31] will retry after 15.092242293s: kubelet not initialised
	I1207 21:17:45.433353   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:47.933249   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:46.931078   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:49.430564   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:47.510355   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:50.010022   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:49.935001   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:52.433167   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:51.430664   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:53.431310   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:55.431508   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:52.509729   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:55.010127   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:54.937299   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.432126   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.929516   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:59.929800   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.511723   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:00.010732   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:59.190582   50270 retry.go:31] will retry after 18.708242221s: kubelet not initialised
	I1207 21:17:59.932898   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.435773   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.429487   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.931336   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.011470   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.508873   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:06.510378   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.932311   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:07.434111   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:07.431033   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.931058   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.009614   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:11.009942   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.932527   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:11.933100   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:14.432890   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:12.429420   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:14.431778   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:13.010085   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:15.509812   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:17.907480   50270 kubeadm.go:787] kubelet initialised
	I1207 21:18:17.907516   50270 kubeadm.go:788] duration metric: took 58.6230723s waiting for restarted kubelet to initialise ...
	I1207 21:18:17.907523   50270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:18:17.912349   50270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.917692   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.917710   50270 pod_ready.go:81] duration metric: took 5.339125ms waiting for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.917718   50270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.923173   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.923192   50270 pod_ready.go:81] duration metric: took 5.469466ms waiting for pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.923200   50270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.928824   50270 pod_ready.go:92] pod "etcd-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.928846   50270 pod_ready.go:81] duration metric: took 5.638159ms waiting for pod "etcd-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.928856   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.934993   50270 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.935014   50270 pod_ready.go:81] duration metric: took 6.149728ms waiting for pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.935025   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.311907   50270 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:18.311934   50270 pod_ready.go:81] duration metric: took 376.900024ms waiting for pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.311947   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:16.931768   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.932732   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:16.930954   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.932194   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.009341   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:20.010383   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.709795   50270 pod_ready.go:92] pod "kube-proxy-wrl9t" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:18.709818   50270 pod_ready.go:81] duration metric: took 397.865434ms waiting for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.709828   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:19.107018   50270 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:19.107046   50270 pod_ready.go:81] duration metric: took 397.21085ms waiting for pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:19.107074   50270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:21.413113   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.414993   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:20.937780   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.432192   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:21.429764   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.430826   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.930929   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:22.510894   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.009872   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.914333   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.914486   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.432249   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.432529   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.930973   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.430718   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.510016   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.009983   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.415400   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.912237   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:29.932694   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.433150   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.432680   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.931118   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.010572   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.508896   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:36.509628   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.913374   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:36.914250   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.933409   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:37.432655   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.432740   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:37.430165   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.930630   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.009629   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:41.009658   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:38.914325   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:40.915158   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:43.413980   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:41.932574   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:44.432525   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:42.431330   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:44.929635   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:43.009978   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:45.010954   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:45.414082   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.415225   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:46.932342   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:48.932460   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.429890   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.931948   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.508820   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.508885   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:51.510909   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.916969   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:52.414590   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:51.431888   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:53.432497   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:52.429836   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.429987   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.010442   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.520121   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.415187   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.914505   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:55.433372   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:57.437496   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.932937   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.430774   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.010885   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.510473   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.413820   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.413911   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.414163   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.932159   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.932344   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:04.432873   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.430926   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.930199   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.930253   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.511496   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.512541   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.913832   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:07.915554   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:06.433629   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:08.933148   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:07.931760   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.431655   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:08.009852   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.010279   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.415114   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.913846   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:11.433166   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:13.933572   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.930147   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:14.935480   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.010617   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:14.510815   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:15.414959   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.913372   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:16.433375   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:18.932915   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.436017   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.933613   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.008855   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.010583   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.510650   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.913760   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.913931   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.434113   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:23.932185   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:22.429942   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:24.432486   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:24.009731   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.513595   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:23.913964   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:25.915033   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:28.415173   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.433721   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:28.932763   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.934197   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:29.432795   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:29.008998   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.011163   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:30.912991   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:32.914672   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.432802   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.932626   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.930505   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.931069   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.510138   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:36.010166   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:34.915019   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:37.414169   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:35.933595   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.432419   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:36.433061   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.929697   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.930753   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.509265   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.509898   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:39.414719   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:41.914208   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.932356   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:42.932643   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:43.430519   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:45.930095   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:42.510763   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:44.511006   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:43.914874   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:46.414739   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:45.431904   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.932732   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.930507   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:49.930634   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.009537   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:49.009825   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.010633   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:48.914101   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.413288   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:50.433022   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:52.932549   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.930920   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:54.433488   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:53.508693   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.509440   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:53.913446   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.914532   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.416064   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.432116   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:57.935271   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:56.929900   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.931501   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.009318   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.510190   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.915025   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.414806   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.432326   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:02.432758   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:04.434643   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:01.431826   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.931069   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.931648   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.010188   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.010498   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.914269   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:07.914640   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:06.931909   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:08.932549   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:08.431136   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.932438   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:07.509186   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:09.511791   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.415605   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:12.918130   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.934599   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:13.434477   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:13.430502   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.434943   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:12.008903   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:14.010390   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:16.509062   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.415237   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.914465   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.435338   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.933559   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.931293   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:18.408309   50624 pod_ready.go:81] duration metric: took 4m0.000858815s waiting for pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:18.408355   50624 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:20:18.408376   50624 pod_ready.go:38] duration metric: took 4m11.111070516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:18.408405   50624 kubeadm.go:640] restartCluster took 4m30.625453328s
	W1207 21:20:18.408479   50624 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:20:18.408513   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:20:18.510036   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:20.510485   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:19.915160   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:21.915544   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:19.940064   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:22.432481   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:24.432791   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:23.010158   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:25.509777   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:23.915685   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:26.414017   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.415525   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:26.435601   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.932153   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.009824   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:30.509369   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:32.372266   50624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.96372485s)
	I1207 21:20:32.372349   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:20:32.386002   50624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:20:32.395757   50624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:20:32.406709   50624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:20:32.406761   50624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:20:32.465707   50624 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 21:20:32.465842   50624 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:20:32.636031   50624 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:20:32.636171   50624 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:20:32.636296   50624 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:20:32.892368   50624 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:20:32.894341   50624 out.go:204]   - Generating certificates and keys ...
	I1207 21:20:32.894484   50624 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:20:32.894581   50624 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:20:32.894717   50624 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:20:32.894799   50624 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:20:32.895289   50624 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:20:32.895583   50624 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:20:32.896112   50624 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:20:32.896577   50624 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:20:32.897032   50624 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:20:32.897567   50624 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:20:32.897804   50624 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:20:32.897886   50624 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:20:32.942322   50624 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:20:33.084899   50624 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:20:33.286309   50624 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:20:33.482188   50624 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:20:33.483077   50624 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:20:33.487928   50624 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:20:30.912937   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:32.914703   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:30.934926   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:33.431849   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:33.489853   50624 out.go:204]   - Booting up control plane ...
	I1207 21:20:33.490021   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:20:33.490177   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:20:33.490458   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:20:33.509319   50624 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:20:33.509448   50624 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:20:33.509501   50624 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:20:33.654452   50624 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:20:32.509729   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:34.510930   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:34.918486   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.414467   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:35.432767   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.931132   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.009506   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:39.011200   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.509897   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.657033   50624 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003082 seconds
	I1207 21:20:41.657193   50624 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:20:41.673142   50624 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:20:42.218438   50624 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:20:42.218706   50624 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-598346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:20:42.745090   50624 kubeadm.go:322] [bootstrap-token] Using token: 74zooz.4uhmxlwojs4pjw69
	I1207 21:20:42.746934   50624 out.go:204]   - Configuring RBAC rules ...
	I1207 21:20:42.747111   50624 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:20:42.762521   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:20:42.776210   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:20:42.781152   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:20:42.786698   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:20:42.795815   50624 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:20:42.811407   50624 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:20:43.073430   50624 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:20:43.167611   50624 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:20:43.168880   50624 kubeadm.go:322] 
	I1207 21:20:43.168970   50624 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:20:43.169014   50624 kubeadm.go:322] 
	I1207 21:20:43.169111   50624 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:20:43.169132   50624 kubeadm.go:322] 
	I1207 21:20:43.169163   50624 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:20:43.169239   50624 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:20:43.169314   50624 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:20:43.169322   50624 kubeadm.go:322] 
	I1207 21:20:43.169394   50624 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:20:43.169402   50624 kubeadm.go:322] 
	I1207 21:20:43.169475   50624 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:20:43.169500   50624 kubeadm.go:322] 
	I1207 21:20:43.169591   50624 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:20:43.169701   50624 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:20:43.169799   50624 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:20:43.169811   50624 kubeadm.go:322] 
	I1207 21:20:43.169930   50624 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:20:43.170066   50624 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:20:43.170078   50624 kubeadm.go:322] 
	I1207 21:20:43.170177   50624 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 74zooz.4uhmxlwojs4pjw69 \
	I1207 21:20:43.170303   50624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:20:43.170332   50624 kubeadm.go:322] 	--control-plane 
	I1207 21:20:43.170338   50624 kubeadm.go:322] 
	I1207 21:20:43.170463   50624 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:20:43.170474   50624 kubeadm.go:322] 
	I1207 21:20:43.170590   50624 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 74zooz.4uhmxlwojs4pjw69 \
	I1207 21:20:43.170717   50624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:20:43.171438   50624 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:20:43.171461   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:20:43.171467   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:20:43.173556   50624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:20:39.415520   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.416257   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:39.933233   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.933860   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:44.432482   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:43.175267   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:20:43.199404   50624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:20:43.237091   50624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:20:43.237150   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.237203   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=embed-certs-598346 minikube.k8s.io/updated_at=2023_12_07T21_20_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.303369   50624 ops.go:34] apiserver oom_adj: -16
	I1207 21:20:43.670500   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.788364   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:44.394973   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:44.894494   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:45.394695   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:45.895141   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.509949   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:45.511007   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:43.915384   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:45.916082   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:47.916757   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:46.432649   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:48.434738   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:46.394706   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:46.894743   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.395117   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.894780   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:48.395408   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:48.895349   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:49.394860   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:49.894472   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:50.395102   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:50.895157   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.512284   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.011848   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.413787   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:52.913793   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.933240   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:52.935428   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:51.394691   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:51.895193   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:52.395131   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:52.894787   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:53.394652   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:53.895139   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:54.395160   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:54.895153   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:55.394410   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:55.584599   50624 kubeadm.go:1088] duration metric: took 12.347498848s to wait for elevateKubeSystemPrivileges.
	I1207 21:20:55.584628   50624 kubeadm.go:406] StartCluster complete in 5m7.857234007s
	I1207 21:20:55.584645   50624 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:20:55.584733   50624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:20:55.587311   50624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:20:55.587607   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:20:55.587630   50624 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:20:55.587708   50624 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-598346"
	I1207 21:20:55.587716   50624 addons.go:69] Setting default-storageclass=true in profile "embed-certs-598346"
	I1207 21:20:55.587728   50624 addons.go:69] Setting metrics-server=true in profile "embed-certs-598346"
	I1207 21:20:55.587739   50624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-598346"
	I1207 21:20:55.587760   50624 addons.go:231] Setting addon metrics-server=true in "embed-certs-598346"
	W1207 21:20:55.587769   50624 addons.go:240] addon metrics-server should already be in state true
	I1207 21:20:55.587826   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.587736   50624 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-598346"
	W1207 21:20:55.587852   50624 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:20:55.587901   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.587824   50624 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:20:55.588192   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588202   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588223   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.588224   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.588284   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588308   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.605717   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I1207 21:20:55.605750   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I1207 21:20:55.605726   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1207 21:20:55.606254   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606305   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606338   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606778   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606803   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.606823   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606844   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.606826   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606904   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.607178   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607218   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607274   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607420   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.607776   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.607816   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.607818   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.607849   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.610610   50624 addons.go:231] Setting addon default-storageclass=true in "embed-certs-598346"
	W1207 21:20:55.610628   50624 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:20:55.610647   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.610902   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.610927   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.624530   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I1207 21:20:55.624997   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.625474   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.625492   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.625833   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.626016   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.626236   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I1207 21:20:55.626715   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.627093   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I1207 21:20:55.627538   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.627700   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.627709   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.628044   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.628061   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.628109   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.628112   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.629910   50624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:20:55.628721   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.628756   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.631270   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.631338   50624 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:20:55.631357   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:20:55.631371   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.631724   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.634618   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.636632   50624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:20:55.635162   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.635740   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.638311   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:20:55.638331   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:20:55.638354   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.638318   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.638427   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.638930   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.639110   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.639264   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.642987   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.643401   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.643432   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.643605   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.643794   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.643947   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.644065   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.649214   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37993
	I1207 21:20:55.649604   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.650085   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.650106   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.650583   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.650740   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.657356   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.657691   50624 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:20:55.657708   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:20:55.657727   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.659345   50624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-598346" context rescaled to 1 replicas
	I1207 21:20:55.659381   50624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:20:55.660949   50624 out.go:177] * Verifying Kubernetes components...
	I1207 21:20:55.662172   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:20:55.661748   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.662288   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.662323   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.662617   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.662821   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.662992   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.663175   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.825166   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:20:55.850131   50624 node_ready.go:35] waiting up to 6m0s for node "embed-certs-598346" to be "Ready" ...
	I1207 21:20:55.850203   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:20:55.850365   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:20:55.850378   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:20:55.879031   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:20:55.896010   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:20:55.896034   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:20:55.910575   50624 node_ready.go:49] node "embed-certs-598346" has status "Ready":"True"
	I1207 21:20:55.910603   50624 node_ready.go:38] duration metric: took 60.438039ms waiting for node "embed-certs-598346" to be "Ready" ...
	I1207 21:20:55.910615   50624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:55.976847   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:20:55.976874   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:20:55.981345   50624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:56.068591   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:20:52.509374   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:55.012033   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:54.915300   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.414020   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.761169   50624 pod_ready.go:97] error getting pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7cvcf" not found
	I1207 21:20:57.761195   50624 pod_ready.go:81] duration metric: took 1.779826027s waiting for pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:57.761205   50624 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7cvcf" not found
	I1207 21:20:57.761212   50624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.813172   50624 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.962919124s)
	I1207 21:20:58.813238   50624 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1207 21:20:58.813195   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.934130104s)
	I1207 21:20:58.813281   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813299   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813520   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.988311627s)
	I1207 21:20:58.813560   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813572   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813757   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.813776   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.813787   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813796   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813831   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.814066   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.814066   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814093   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.814097   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814110   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.814132   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.814152   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.814511   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814531   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.839304   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.839329   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.839611   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.839653   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.839663   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.859922   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.791233211s)
	I1207 21:20:58.859979   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.859998   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.860412   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.860469   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.860483   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.860495   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.860430   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.860749   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.860768   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.860778   50624 addons.go:467] Verifying addon metrics-server=true in "embed-certs-598346"
	I1207 21:20:58.863874   50624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:20:55.431955   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.434174   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:58.865423   50624 addons.go:502] enable addons completed in 3.277791662s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:20:58.894841   50624 pod_ready.go:92] pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.894877   50624 pod_ready.go:81] duration metric: took 1.133651819s waiting for pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.894891   50624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.906981   50624 pod_ready.go:92] pod "etcd-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.907009   50624 pod_ready.go:81] duration metric: took 12.109561ms waiting for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.907020   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.918176   50624 pod_ready.go:92] pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.918198   50624 pod_ready.go:81] duration metric: took 11.169952ms waiting for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.918211   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.928763   50624 pod_ready.go:92] pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.928791   50624 pod_ready.go:81] duration metric: took 10.570922ms waiting for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.928804   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h4pmv" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.163618   50624 pod_ready.go:92] pod "kube-proxy-h4pmv" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:00.163652   50624 pod_ready.go:81] duration metric: took 1.234839709s waiting for pod "kube-proxy-h4pmv" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.163664   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.455887   50624 pod_ready.go:92] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:00.455909   50624 pod_ready.go:81] duration metric: took 292.236645ms waiting for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.455917   50624 pod_ready.go:38] duration metric: took 4.545291617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:00.455932   50624 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:00.455974   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:00.474126   50624 api_server.go:72] duration metric: took 4.814712718s to wait for apiserver process to appear ...
	I1207 21:21:00.474151   50624 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:00.474170   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:21:00.480909   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1207 21:21:00.482468   50624 api_server.go:141] control plane version: v1.28.4
	I1207 21:21:00.482491   50624 api_server.go:131] duration metric: took 8.332499ms to wait for apiserver health ...
	I1207 21:21:00.482500   50624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:00.658932   50624 system_pods.go:59] 8 kube-system pods found
	I1207 21:21:00.658965   50624 system_pods.go:61] "coredns-5dd5756b68-nllk7" [89c53a27-fa3e-40e9-b180-1bb6ae5c7b62] Running
	I1207 21:21:00.658973   50624 system_pods.go:61] "etcd-embed-certs-598346" [a837c9ba-7a9d-4c61-9474-160ff283b42e] Running
	I1207 21:21:00.658980   50624 system_pods.go:61] "kube-apiserver-embed-certs-598346" [d65bb254-2c09-49c3-98a8-651f580e5f3d] Running
	I1207 21:21:00.658986   50624 system_pods.go:61] "kube-controller-manager-embed-certs-598346" [307a7c5c-0579-4c3c-a84f-e99d61dd8722] Running
	I1207 21:21:00.658992   50624 system_pods.go:61] "kube-proxy-h4pmv" [2d3cc315-efaf-47b9-86e3-851cc930461b] Running
	I1207 21:21:00.658999   50624 system_pods.go:61] "kube-scheduler-embed-certs-598346" [43983338-9029-4240-9b20-b23f64f6880c] Running
	I1207 21:21:00.659010   50624 system_pods.go:61] "metrics-server-57f55c9bc5-pstg2" [463b12c8-de62-4ff8-a5c4-55eeb721eea8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:00.659018   50624 system_pods.go:61] "storage-provisioner" [838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14] Running
	I1207 21:21:00.659036   50624 system_pods.go:74] duration metric: took 176.530206ms to wait for pod list to return data ...
	I1207 21:21:00.659049   50624 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:00.853965   50624 default_sa.go:45] found service account: "default"
	I1207 21:21:00.853997   50624 default_sa.go:55] duration metric: took 194.939162ms for default service account to be created ...
	I1207 21:21:00.854008   50624 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:01.058565   50624 system_pods.go:86] 8 kube-system pods found
	I1207 21:21:01.058594   50624 system_pods.go:89] "coredns-5dd5756b68-nllk7" [89c53a27-fa3e-40e9-b180-1bb6ae5c7b62] Running
	I1207 21:21:01.058600   50624 system_pods.go:89] "etcd-embed-certs-598346" [a837c9ba-7a9d-4c61-9474-160ff283b42e] Running
	I1207 21:21:01.058604   50624 system_pods.go:89] "kube-apiserver-embed-certs-598346" [d65bb254-2c09-49c3-98a8-651f580e5f3d] Running
	I1207 21:21:01.058609   50624 system_pods.go:89] "kube-controller-manager-embed-certs-598346" [307a7c5c-0579-4c3c-a84f-e99d61dd8722] Running
	I1207 21:21:01.058613   50624 system_pods.go:89] "kube-proxy-h4pmv" [2d3cc315-efaf-47b9-86e3-851cc930461b] Running
	I1207 21:21:01.058617   50624 system_pods.go:89] "kube-scheduler-embed-certs-598346" [43983338-9029-4240-9b20-b23f64f6880c] Running
	I1207 21:21:01.058634   50624 system_pods.go:89] "metrics-server-57f55c9bc5-pstg2" [463b12c8-de62-4ff8-a5c4-55eeb721eea8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:01.058640   50624 system_pods.go:89] "storage-provisioner" [838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14] Running
	I1207 21:21:01.058651   50624 system_pods.go:126] duration metric: took 204.636417ms to wait for k8s-apps to be running ...
	I1207 21:21:01.058664   50624 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:01.058707   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:01.081694   50624 system_svc.go:56] duration metric: took 23.018184ms WaitForService to wait for kubelet.
	I1207 21:21:01.081719   50624 kubeadm.go:581] duration metric: took 5.422310896s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:01.081736   50624 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:01.254804   50624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:01.254838   50624 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:01.254851   50624 node_conditions.go:105] duration metric: took 173.110501ms to run NodePressure ...
	I1207 21:21:01.254866   50624 start.go:228] waiting for startup goroutines ...
	I1207 21:21:01.254875   50624 start.go:233] waiting for cluster config update ...
	I1207 21:21:01.254888   50624 start.go:242] writing updated cluster config ...
	I1207 21:21:01.255260   50624 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:01.312696   50624 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:21:01.314740   50624 out.go:177] * Done! kubectl is now configured to use "embed-certs-598346" cluster and "default" namespace by default
	I1207 21:20:57.510167   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:59.202324   51037 pod_ready.go:81] duration metric: took 4m0.000618876s waiting for pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:59.202361   51037 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:20:59.202386   51037 pod_ready.go:38] duration metric: took 4m13.59894194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:59.202417   51037 kubeadm.go:640] restartCluster took 4m34.848470509s
	W1207 21:20:59.202490   51037 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:20:59.202525   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:20:59.416072   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:01.416132   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:59.932924   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:01.933678   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:04.432068   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:03.914100   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:06.414149   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:06.432277   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:08.432456   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:08.914660   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:10.927167   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.414941   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.233635   51037 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.031083103s)
	I1207 21:21:13.233717   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:13.246941   51037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:21:13.256697   51037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:21:13.265143   51037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:21:13.265188   51037 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:21:13.323766   51037 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1207 21:21:13.323875   51037 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:21:13.477749   51037 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:21:13.477938   51037 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:21:13.478083   51037 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:21:13.750607   51037 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:21:13.752541   51037 out.go:204]   - Generating certificates and keys ...
	I1207 21:21:13.752655   51037 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:21:13.752735   51037 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:21:13.752887   51037 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:21:13.753031   51037 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:21:13.753250   51037 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:21:13.753432   51037 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:21:13.753647   51037 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:21:13.753850   51037 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:21:13.754167   51037 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:21:13.755114   51037 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:21:13.755889   51037 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:21:13.756020   51037 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:21:13.859938   51037 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:21:14.193613   51037 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 21:21:14.239766   51037 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:21:14.448306   51037 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:21:14.537558   51037 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:21:14.538242   51037 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:21:14.542910   51037 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:21:10.432632   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:12.932769   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.123869   51113 pod_ready.go:81] duration metric: took 4m0.000917841s waiting for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	E1207 21:21:13.123898   51113 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:21:13.123907   51113 pod_ready.go:38] duration metric: took 4m7.926070649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:13.123923   51113 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:13.123951   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:13.124010   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:13.197887   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:13.197918   51113 cri.go:89] found id: ""
	I1207 21:21:13.197947   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:13.198016   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.203887   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:13.203953   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:13.250727   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:13.250754   51113 cri.go:89] found id: ""
	I1207 21:21:13.250766   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:13.250823   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.255837   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:13.255881   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:13.297690   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:13.297719   51113 cri.go:89] found id: ""
	I1207 21:21:13.297729   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:13.297786   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.303238   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:13.303301   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:13.349838   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:13.349879   51113 cri.go:89] found id: ""
	I1207 21:21:13.349890   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:13.349960   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.354368   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:13.354423   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:13.394201   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:13.394230   51113 cri.go:89] found id: ""
	I1207 21:21:13.394240   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:13.394298   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.398418   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:13.398489   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:13.443027   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:13.443055   51113 cri.go:89] found id: ""
	I1207 21:21:13.443065   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:13.443129   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.447530   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:13.447601   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:13.491670   51113 cri.go:89] found id: ""
	I1207 21:21:13.491712   51113 logs.go:284] 0 containers: []
	W1207 21:21:13.491720   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:13.491735   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:13.491795   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:13.541386   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:13.541414   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:13.541421   51113 cri.go:89] found id: ""
	I1207 21:21:13.541430   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:13.541491   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.546270   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.551524   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:13.551549   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:13.630073   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:13.630119   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:13.680287   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:13.680318   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:13.733406   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:13.733442   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:13.751810   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:13.751845   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:13.905859   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:13.905889   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:13.950595   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:13.950626   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:13.993833   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:13.993862   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:14.488205   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:14.488242   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:14.531169   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:14.531201   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:14.588229   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:14.588268   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:14.642280   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:14.642310   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:14.693027   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:14.693062   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:14.544787   51037 out.go:204]   - Booting up control plane ...
	I1207 21:21:14.544925   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:21:14.545032   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:21:14.545988   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:21:14.565092   51037 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:21:14.566289   51037 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:21:14.566356   51037 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:21:14.723698   51037 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:21:15.913198   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:17.914942   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:17.234321   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:17.253156   51113 api_server.go:72] duration metric: took 4m17.441427611s to wait for apiserver process to appear ...
	I1207 21:21:17.253187   51113 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:17.253223   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:17.253330   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:17.301526   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:17.301557   51113 cri.go:89] found id: ""
	I1207 21:21:17.301573   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:17.301631   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.306049   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:17.306124   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:17.359167   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:17.359195   51113 cri.go:89] found id: ""
	I1207 21:21:17.359205   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:17.359264   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.363853   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:17.363919   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:17.403245   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:17.403271   51113 cri.go:89] found id: ""
	I1207 21:21:17.403281   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:17.403345   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.407694   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:17.407771   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:17.462260   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:17.462287   51113 cri.go:89] found id: ""
	I1207 21:21:17.462298   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:17.462355   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.467157   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:17.467214   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:17.502206   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:17.502236   51113 cri.go:89] found id: ""
	I1207 21:21:17.502246   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:17.502301   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.507601   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:17.507672   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:17.550248   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:17.550275   51113 cri.go:89] found id: ""
	I1207 21:21:17.550284   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:17.550345   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.554817   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:17.554879   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:17.595234   51113 cri.go:89] found id: ""
	I1207 21:21:17.595262   51113 logs.go:284] 0 containers: []
	W1207 21:21:17.595272   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:17.595280   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:17.595331   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:17.657464   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:17.657491   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:17.657501   51113 cri.go:89] found id: ""
	I1207 21:21:17.657511   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:17.657566   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.662364   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.667878   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:17.667901   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:17.716160   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:17.716187   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:17.770503   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:17.770548   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:17.836877   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:17.836933   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:17.881499   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:17.881536   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:17.930792   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:17.930837   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:17.945486   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:17.945519   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:18.087782   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:18.087825   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:18.149272   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:18.149312   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:18.196792   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:18.196829   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:18.243539   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:18.243575   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:18.305424   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:18.305465   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:18.772176   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:18.772213   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:19.916426   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:22.414318   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:22.728616   51037 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002882 seconds
	I1207 21:21:22.745711   51037 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:21:22.772747   51037 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:21:23.310807   51037 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:21:23.311004   51037 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-950431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:21:23.826933   51037 kubeadm.go:322] [bootstrap-token] Using token: ft70hz.nx8ps5rcldht4kzk
	I1207 21:21:23.828530   51037 out.go:204]   - Configuring RBAC rules ...
	I1207 21:21:23.828676   51037 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:21:23.836739   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:21:23.845207   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:21:23.852566   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:21:23.856912   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:21:23.863418   51037 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:21:23.881183   51037 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:21:24.185664   51037 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:21:24.246564   51037 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:21:24.246626   51037 kubeadm.go:322] 
	I1207 21:21:24.246741   51037 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:21:24.246761   51037 kubeadm.go:322] 
	I1207 21:21:24.246858   51037 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:21:24.246868   51037 kubeadm.go:322] 
	I1207 21:21:24.246898   51037 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:21:24.246967   51037 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:21:24.247047   51037 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:21:24.247063   51037 kubeadm.go:322] 
	I1207 21:21:24.247122   51037 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:21:24.247132   51037 kubeadm.go:322] 
	I1207 21:21:24.247183   51037 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:21:24.247193   51037 kubeadm.go:322] 
	I1207 21:21:24.247259   51037 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:21:24.247361   51037 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:21:24.247450   51037 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:21:24.247461   51037 kubeadm.go:322] 
	I1207 21:21:24.247565   51037 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:21:24.247669   51037 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:21:24.247678   51037 kubeadm.go:322] 
	I1207 21:21:24.247777   51037 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft70hz.nx8ps5rcldht4kzk \
	I1207 21:21:24.247910   51037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:21:24.247941   51037 kubeadm.go:322] 	--control-plane 
	I1207 21:21:24.247951   51037 kubeadm.go:322] 
	I1207 21:21:24.248049   51037 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:21:24.248059   51037 kubeadm.go:322] 
	I1207 21:21:24.248150   51037 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft70hz.nx8ps5rcldht4kzk \
	I1207 21:21:24.248271   51037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:21:24.249001   51037 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:21:24.249031   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:21:24.249041   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:21:24.250938   51037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:21:21.338084   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:21:21.343250   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 200:
	ok
	I1207 21:21:21.344871   51113 api_server.go:141] control plane version: v1.28.4
	I1207 21:21:21.344892   51113 api_server.go:131] duration metric: took 4.091697961s to wait for apiserver health ...
	I1207 21:21:21.344901   51113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:21.344930   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:21.344990   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:21.385908   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:21.385944   51113 cri.go:89] found id: ""
	I1207 21:21:21.385954   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:21.386011   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.390584   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:21.390655   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:21.435206   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:21.435226   51113 cri.go:89] found id: ""
	I1207 21:21:21.435236   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:21.435294   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.441020   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:21.441091   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:21.480294   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:21.480319   51113 cri.go:89] found id: ""
	I1207 21:21:21.480329   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:21.480384   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.484454   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:21.484511   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:21.531792   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:21.531817   51113 cri.go:89] found id: ""
	I1207 21:21:21.531826   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:21.531884   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.536194   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:21.536265   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:21.579784   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:21.579803   51113 cri.go:89] found id: ""
	I1207 21:21:21.579810   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:21.579852   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.583895   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:21.583961   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:21.623350   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:21.623383   51113 cri.go:89] found id: ""
	I1207 21:21:21.623393   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:21.623450   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.628173   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:21.628226   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:21.670522   51113 cri.go:89] found id: ""
	I1207 21:21:21.670549   51113 logs.go:284] 0 containers: []
	W1207 21:21:21.670559   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:21.670565   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:21.670622   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:21.717892   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:21.717918   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:21.717939   51113 cri.go:89] found id: ""
	I1207 21:21:21.717958   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:21.718024   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.724161   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.728796   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:21.728817   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:21.743574   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:21.743599   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:22.158202   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:22.158247   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:22.224569   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:22.224610   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:22.376503   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:22.376539   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:22.421207   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:22.421236   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:22.468100   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:22.468130   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:22.514216   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:22.514246   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:22.563190   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:22.563217   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:22.622636   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:22.622673   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:22.673280   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:22.673309   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:22.724767   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:22.724799   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:22.787505   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:22.787539   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:25.337268   51113 system_pods.go:59] 8 kube-system pods found
	I1207 21:21:25.337297   51113 system_pods.go:61] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running
	I1207 21:21:25.337304   51113 system_pods.go:61] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running
	I1207 21:21:25.337312   51113 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running
	I1207 21:21:25.337319   51113 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running
	I1207 21:21:25.337325   51113 system_pods.go:61] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running
	I1207 21:21:25.337331   51113 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running
	I1207 21:21:25.337338   51113 system_pods.go:61] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:25.337347   51113 system_pods.go:61] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running
	I1207 21:21:25.337354   51113 system_pods.go:74] duration metric: took 3.99244703s to wait for pod list to return data ...
	I1207 21:21:25.337363   51113 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:25.340607   51113 default_sa.go:45] found service account: "default"
	I1207 21:21:25.340630   51113 default_sa.go:55] duration metric: took 3.261042ms for default service account to be created ...
	I1207 21:21:25.340637   51113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:25.351616   51113 system_pods.go:86] 8 kube-system pods found
	I1207 21:21:25.351640   51113 system_pods.go:89] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running
	I1207 21:21:25.351646   51113 system_pods.go:89] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running
	I1207 21:21:25.351651   51113 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running
	I1207 21:21:25.351656   51113 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running
	I1207 21:21:25.351659   51113 system_pods.go:89] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running
	I1207 21:21:25.351663   51113 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running
	I1207 21:21:25.351670   51113 system_pods.go:89] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:25.351675   51113 system_pods.go:89] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running
	I1207 21:21:25.351681   51113 system_pods.go:126] duration metric: took 11.04015ms to wait for k8s-apps to be running ...
	I1207 21:21:25.351686   51113 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:25.351725   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:25.368853   51113 system_svc.go:56] duration metric: took 17.156347ms WaitForService to wait for kubelet.
	I1207 21:21:25.368883   51113 kubeadm.go:581] duration metric: took 4m25.557159696s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:25.368908   51113 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:25.372224   51113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:25.372247   51113 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:25.372257   51113 node_conditions.go:105] duration metric: took 3.343495ms to run NodePressure ...
	I1207 21:21:25.372268   51113 start.go:228] waiting for startup goroutines ...
	I1207 21:21:25.372273   51113 start.go:233] waiting for cluster config update ...
	I1207 21:21:25.372282   51113 start.go:242] writing updated cluster config ...
	I1207 21:21:25.372598   51113 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:25.426941   51113 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:21:25.429177   51113 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-275828" cluster and "default" namespace by default
	I1207 21:21:24.252623   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:21:24.278852   51037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:21:24.346081   51037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:21:24.346144   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.346161   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=no-preload-950431 minikube.k8s.io/updated_at=2023_12_07T21_21_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.458044   51037 ops.go:34] apiserver oom_adj: -16
	I1207 21:21:24.715413   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.801098   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:25.396467   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:25.895918   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:26.396185   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.914616   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:26.915500   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:26.896260   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:27.396455   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:27.896542   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:28.396551   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:28.896865   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.395921   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.896782   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:30.396223   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:30.896296   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:31.395834   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.414005   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:31.415580   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:31.896019   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:32.395959   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:32.895826   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:33.396820   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:33.896674   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:34.396109   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:34.896537   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:35.396438   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:35.896709   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:36.396689   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:36.896404   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:37.062200   51037 kubeadm.go:1088] duration metric: took 12.716124423s to wait for elevateKubeSystemPrivileges.
	I1207 21:21:37.062237   51037 kubeadm.go:406] StartCluster complete in 5m12.769835709s
	I1207 21:21:37.062255   51037 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:21:37.062333   51037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:21:37.064828   51037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:21:37.065103   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:21:37.065193   51037 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:21:37.065273   51037 addons.go:69] Setting storage-provisioner=true in profile "no-preload-950431"
	I1207 21:21:37.065291   51037 addons.go:231] Setting addon storage-provisioner=true in "no-preload-950431"
	W1207 21:21:37.065299   51037 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:21:37.065297   51037 addons.go:69] Setting default-storageclass=true in profile "no-preload-950431"
	I1207 21:21:37.065323   51037 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:21:37.065329   51037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-950431"
	I1207 21:21:37.065349   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.065302   51037 addons.go:69] Setting metrics-server=true in profile "no-preload-950431"
	I1207 21:21:37.065374   51037 addons.go:231] Setting addon metrics-server=true in "no-preload-950431"
	W1207 21:21:37.065388   51037 addons.go:240] addon metrics-server should already be in state true
	I1207 21:21:37.065423   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.065737   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065751   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065751   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065780   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.065772   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.065821   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.083129   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I1207 21:21:37.083593   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I1207 21:21:37.083761   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084047   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084356   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I1207 21:21:37.084566   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.084590   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.084625   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.084645   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.084667   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084935   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.084997   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.085044   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.085065   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.085381   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.085505   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.085542   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.085741   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.085909   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.085964   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.089134   51037 addons.go:231] Setting addon default-storageclass=true in "no-preload-950431"
	W1207 21:21:37.089153   51037 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:21:37.089180   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.089673   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.089712   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.101048   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35191
	I1207 21:21:37.101516   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.102279   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.102300   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.102727   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.103618   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.106122   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.107693   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I1207 21:21:37.107843   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I1207 21:21:37.108128   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.108521   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.108696   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.108709   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.109070   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.109204   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.109227   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.114090   51037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:21:37.109833   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.109949   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.115707   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.115743   51037 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:21:37.115765   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:21:37.115789   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.116919   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.119056   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.120429   51037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:21:37.121716   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:21:37.121741   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:21:37.121759   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.119470   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.121830   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.121852   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.120097   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.122062   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.122309   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.122432   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.124738   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.124992   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.125012   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.125346   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.125523   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.125647   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.125817   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.136943   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I1207 21:21:37.137636   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.138210   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.138233   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.138659   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.138896   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.140541   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.140792   51037 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:21:37.140808   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:21:37.140824   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.144251   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.144616   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.144667   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.144856   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.145009   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.145167   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.145260   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.157909   51037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-950431" context rescaled to 1 replicas
	I1207 21:21:37.157965   51037 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:21:37.159529   51037 out.go:177] * Verifying Kubernetes components...
	I1207 21:21:33.914686   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:35.916902   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:38.413489   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:37.160895   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:37.329265   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:21:37.476842   51037 node_ready.go:35] waiting up to 6m0s for node "no-preload-950431" to be "Ready" ...
	I1207 21:21:37.481433   51037 node_ready.go:49] node "no-preload-950431" has status "Ready":"True"
	I1207 21:21:37.481456   51037 node_ready.go:38] duration metric: took 4.57457ms waiting for node "no-preload-950431" to be "Ready" ...
	I1207 21:21:37.481467   51037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:37.499564   51037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-cz2xd" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:37.556110   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:21:37.556142   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:21:37.558917   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:21:37.575696   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:21:37.653458   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:21:37.653478   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:21:37.782294   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:21:37.782322   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:21:37.850657   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:21:38.161232   51037 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1207 21:21:38.734356   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.175402881s)
	I1207 21:21:38.734410   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734420   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734423   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.158690213s)
	I1207 21:21:38.734466   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734482   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734859   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.734873   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.734860   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.734911   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.734927   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734935   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734913   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735006   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.735016   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.735028   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.735166   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735192   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.735321   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.735357   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735369   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.772677   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.772700   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.772969   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.773038   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.773055   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.056990   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206289914s)
	I1207 21:21:39.057048   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:39.057064   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:39.057441   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:39.057480   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:39.057502   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.057520   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:39.057534   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:39.057809   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:39.057826   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.057845   51037 addons.go:467] Verifying addon metrics-server=true in "no-preload-950431"
	I1207 21:21:39.060003   51037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:21:39.061797   51037 addons.go:502] enable addons completed in 1.996609653s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:21:39.690111   51037 pod_ready.go:102] pod "coredns-76f75df574-cz2xd" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:40.698712   51037 pod_ready.go:92] pod "coredns-76f75df574-cz2xd" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.698739   51037 pod_ready.go:81] duration metric: took 3.199144567s waiting for pod "coredns-76f75df574-cz2xd" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.698751   51037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hsjsq" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.714087   51037 pod_ready.go:92] pod "coredns-76f75df574-hsjsq" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.714108   51037 pod_ready.go:81] duration metric: took 15.350128ms waiting for pod "coredns-76f75df574-hsjsq" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.714117   51037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.725058   51037 pod_ready.go:92] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.725078   51037 pod_ready.go:81] duration metric: took 10.955777ms waiting for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.725089   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.742099   51037 pod_ready.go:92] pod "kube-apiserver-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.742127   51037 pod_ready.go:81] duration metric: took 17.029172ms waiting for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.742140   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.748676   51037 pod_ready.go:92] pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.748699   51037 pod_ready.go:81] duration metric: took 6.549805ms waiting for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.748713   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6v8td" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:41.988512   51037 pod_ready.go:92] pod "kube-proxy-6v8td" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:41.988537   51037 pod_ready.go:81] duration metric: took 1.239816309s waiting for pod "kube-proxy-6v8td" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:41.988545   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:42.283301   51037 pod_ready.go:92] pod "kube-scheduler-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:42.283330   51037 pod_ready.go:81] duration metric: took 294.777559ms waiting for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:42.283341   51037 pod_ready.go:38] duration metric: took 4.801864648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:42.283360   51037 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:42.283420   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:42.308983   51037 api_server.go:72] duration metric: took 5.150987572s to wait for apiserver process to appear ...
	I1207 21:21:42.309013   51037 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:42.309036   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:21:42.315006   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 200:
	ok
	I1207 21:21:42.316220   51037 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 21:21:42.316240   51037 api_server.go:131] duration metric: took 7.219959ms to wait for apiserver health ...
	I1207 21:21:42.316247   51037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:42.485186   51037 system_pods.go:59] 9 kube-system pods found
	I1207 21:21:42.485214   51037 system_pods.go:61] "coredns-76f75df574-cz2xd" [5757c023-02cd-4be8-b4cc-6b45154f7b5a] Running
	I1207 21:21:42.485218   51037 system_pods.go:61] "coredns-76f75df574-hsjsq" [91f9ed18-c964-409d-9a58-7c84c62d51db] Running
	I1207 21:21:42.485223   51037 system_pods.go:61] "etcd-no-preload-950431" [c5480a67-a406-4014-bf13-3e4e970d528b] Running
	I1207 21:21:42.485228   51037 system_pods.go:61] "kube-apiserver-no-preload-950431" [73177a27-c561-4f5c-900a-80226abb7bf1] Running
	I1207 21:21:42.485234   51037 system_pods.go:61] "kube-controller-manager-no-preload-950431" [3e231c95-fb0b-4915-9ab0-45f35e7d6a2c] Running
	I1207 21:21:42.485237   51037 system_pods.go:61] "kube-proxy-6v8td" [268d28d1-60a9-4323-b36f-883388fbdcea] Running
	I1207 21:21:42.485242   51037 system_pods.go:61] "kube-scheduler-no-preload-950431" [a6767118-a858-439d-a58f-0e62b0b7442e] Running
	I1207 21:21:42.485251   51037 system_pods.go:61] "metrics-server-57f55c9bc5-ffkls" [e571e115-9e30-4be3-b77c-27db27a95feb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:42.485258   51037 system_pods.go:61] "storage-provisioner" [9400eb14-80e0-4725-906e-b80cd7e998a1] Running
	I1207 21:21:42.485278   51037 system_pods.go:74] duration metric: took 169.025303ms to wait for pod list to return data ...
	I1207 21:21:42.485287   51037 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:42.680542   51037 default_sa.go:45] found service account: "default"
	I1207 21:21:42.680569   51037 default_sa.go:55] duration metric: took 195.272707ms for default service account to be created ...
	I1207 21:21:42.680577   51037 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:42.890877   51037 system_pods.go:86] 9 kube-system pods found
	I1207 21:21:42.890927   51037 system_pods.go:89] "coredns-76f75df574-cz2xd" [5757c023-02cd-4be8-b4cc-6b45154f7b5a] Running
	I1207 21:21:42.890933   51037 system_pods.go:89] "coredns-76f75df574-hsjsq" [91f9ed18-c964-409d-9a58-7c84c62d51db] Running
	I1207 21:21:42.890938   51037 system_pods.go:89] "etcd-no-preload-950431" [c5480a67-a406-4014-bf13-3e4e970d528b] Running
	I1207 21:21:42.890942   51037 system_pods.go:89] "kube-apiserver-no-preload-950431" [73177a27-c561-4f5c-900a-80226abb7bf1] Running
	I1207 21:21:42.890946   51037 system_pods.go:89] "kube-controller-manager-no-preload-950431" [3e231c95-fb0b-4915-9ab0-45f35e7d6a2c] Running
	I1207 21:21:42.890950   51037 system_pods.go:89] "kube-proxy-6v8td" [268d28d1-60a9-4323-b36f-883388fbdcea] Running
	I1207 21:21:42.890954   51037 system_pods.go:89] "kube-scheduler-no-preload-950431" [a6767118-a858-439d-a58f-0e62b0b7442e] Running
	I1207 21:21:42.890960   51037 system_pods.go:89] "metrics-server-57f55c9bc5-ffkls" [e571e115-9e30-4be3-b77c-27db27a95feb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:42.890965   51037 system_pods.go:89] "storage-provisioner" [9400eb14-80e0-4725-906e-b80cd7e998a1] Running
	I1207 21:21:42.890973   51037 system_pods.go:126] duration metric: took 210.38383ms to wait for k8s-apps to be running ...
	I1207 21:21:42.890979   51037 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:42.891021   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:42.907279   51037 system_svc.go:56] duration metric: took 16.290689ms WaitForService to wait for kubelet.
	I1207 21:21:42.907306   51037 kubeadm.go:581] duration metric: took 5.749318034s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:42.907328   51037 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:43.081361   51037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:43.081390   51037 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:43.081401   51037 node_conditions.go:105] duration metric: took 174.067442ms to run NodePressure ...
	I1207 21:21:43.081412   51037 start.go:228] waiting for startup goroutines ...
	I1207 21:21:43.081420   51037 start.go:233] waiting for cluster config update ...
	I1207 21:21:43.081433   51037 start.go:242] writing updated cluster config ...
	I1207 21:21:43.081691   51037 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:43.131409   51037 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1207 21:21:43.133483   51037 out.go:177] * Done! kubectl is now configured to use "no-preload-950431" cluster and "default" namespace by default
	I1207 21:21:40.414676   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:42.913795   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:44.914599   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:47.414431   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:49.913391   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:51.914426   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:53.915196   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:55.923342   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:58.413783   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:00.414241   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:02.414435   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:04.913358   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:06.913909   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:08.915098   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:11.414320   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:13.414489   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:15.913521   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:18.415215   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:19.107244   50270 pod_ready.go:81] duration metric: took 4m0.000150933s waiting for pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace to be "Ready" ...
	E1207 21:22:19.107300   50270 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:22:19.107323   50270 pod_ready.go:38] duration metric: took 4m1.199790563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:19.107355   50270 kubeadm.go:640] restartCluster took 5m20.261390035s
	W1207 21:22:19.107437   50270 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:22:19.107470   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:22:26.124587   50270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (7.017092462s)
	I1207 21:22:26.124664   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:22:26.139323   50270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:22:26.150243   50270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:22:26.164289   50270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:22:26.164356   50270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1207 21:22:26.390137   50270 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:22:39.046001   50270 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1207 21:22:39.046063   50270 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:22:39.046164   50270 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:22:39.046322   50270 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:22:39.046454   50270 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:22:39.046581   50270 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:22:39.046685   50270 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:22:39.046759   50270 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1207 21:22:39.046836   50270 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:22:39.048426   50270 out.go:204]   - Generating certificates and keys ...
	I1207 21:22:39.048532   50270 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:22:39.048617   50270 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:22:39.048713   50270 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:22:39.048808   50270 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:22:39.048899   50270 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:22:39.048977   50270 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:22:39.049066   50270 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:22:39.049151   50270 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:22:39.049254   50270 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:22:39.049341   50270 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:22:39.049396   50270 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:22:39.049496   50270 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:22:39.049578   50270 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:22:39.049671   50270 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:22:39.049758   50270 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:22:39.049829   50270 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:22:39.049884   50270 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:22:39.051499   50270 out.go:204]   - Booting up control plane ...
	I1207 21:22:39.051604   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:22:39.051706   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:22:39.051778   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:22:39.051841   50270 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:22:39.052043   50270 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:22:39.052137   50270 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.502878 seconds
	I1207 21:22:39.052296   50270 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:22:39.052458   50270 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:22:39.052537   50270 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:22:39.052714   50270 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-483745 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1207 21:22:39.052802   50270 kubeadm.go:322] [bootstrap-token] Using token: 88595b.vk24k0k7lcyxvxlg
	I1207 21:22:39.054142   50270 out.go:204]   - Configuring RBAC rules ...
	I1207 21:22:39.054250   50270 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:22:39.054369   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:22:39.054470   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:22:39.054565   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:22:39.054675   50270 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:22:39.054740   50270 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:22:39.054805   50270 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:22:39.054813   50270 kubeadm.go:322] 
	I1207 21:22:39.054905   50270 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:22:39.054917   50270 kubeadm.go:322] 
	I1207 21:22:39.054996   50270 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:22:39.055004   50270 kubeadm.go:322] 
	I1207 21:22:39.055031   50270 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:22:39.055107   50270 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:22:39.055174   50270 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:22:39.055187   50270 kubeadm.go:322] 
	I1207 21:22:39.055254   50270 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:22:39.055366   50270 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:22:39.055467   50270 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:22:39.055476   50270 kubeadm.go:322] 
	I1207 21:22:39.055565   50270 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1207 21:22:39.055655   50270 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:22:39.055663   50270 kubeadm.go:322] 
	I1207 21:22:39.055776   50270 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 88595b.vk24k0k7lcyxvxlg \
	I1207 21:22:39.055929   50270 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:22:39.055969   50270 kubeadm.go:322]     --control-plane 	  
	I1207 21:22:39.055979   50270 kubeadm.go:322] 
	I1207 21:22:39.056099   50270 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:22:39.056111   50270 kubeadm.go:322] 
	I1207 21:22:39.056215   50270 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 88595b.vk24k0k7lcyxvxlg \
	I1207 21:22:39.056371   50270 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:22:39.056402   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:22:39.056414   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:22:39.058073   50270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:22:39.059659   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:22:39.078052   50270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:22:39.118479   50270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:22:39.118540   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=old-k8s-version-483745 minikube.k8s.io/updated_at=2023_12_07T21_22_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.118551   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.149391   50270 ops.go:34] apiserver oom_adj: -16
	I1207 21:22:39.334606   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.476182   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:40.075027   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:40.574693   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:41.074497   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:41.575214   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:42.075168   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:42.575162   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:43.074671   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:43.575406   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:44.074823   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:44.574597   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:45.075138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:45.575119   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:46.075437   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:46.575138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:47.075138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:47.575171   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:48.074939   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:48.574679   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:49.075065   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:49.574571   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:50.074553   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:50.575129   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:51.075320   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:51.574806   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:52.075136   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:52.575144   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:53.075139   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:53.575394   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:54.075185   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:54.274051   50270 kubeadm.go:1088] duration metric: took 15.155559482s to wait for elevateKubeSystemPrivileges.
	I1207 21:22:54.274092   50270 kubeadm.go:406] StartCluster complete in 5m55.488226201s
	I1207 21:22:54.274140   50270 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:22:54.274247   50270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:22:54.276679   50270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:22:54.276902   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:22:54.276991   50270 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:22:54.277064   50270 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277090   50270 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-483745"
	W1207 21:22:54.277103   50270 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:22:54.277101   50270 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277089   50270 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277116   50270 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:22:54.277127   50270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-483745"
	I1207 21:22:54.277152   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.277119   50270 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-483745"
	W1207 21:22:54.277169   50270 addons.go:240] addon metrics-server should already be in state true
	I1207 21:22:54.277208   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.277529   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277564   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277573   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.277581   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277591   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.277612   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.293696   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I1207 21:22:54.293908   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I1207 21:22:54.294118   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.294622   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.294642   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.294656   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.295100   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.295119   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.295182   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.295512   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.295671   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.295709   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I1207 21:22:54.295752   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.295791   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.296131   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.296662   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.296681   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.297077   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.297597   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.297635   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.299605   50270 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-483745"
	W1207 21:22:54.299630   50270 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:22:54.299658   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.300047   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.300087   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.314531   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1207 21:22:54.315168   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.315718   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.315804   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I1207 21:22:54.315809   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.316447   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.316491   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.316657   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.316979   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.317005   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.317340   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.317887   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.317945   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.319086   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.321272   50270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:22:54.320074   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I1207 21:22:54.322834   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:22:54.322849   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:22:54.322863   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.323218   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.323677   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.323689   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.323997   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.324166   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.326460   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.328172   50270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:22:54.327148   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.328366   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.329567   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.329588   50270 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:22:54.329593   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.329600   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:22:54.329613   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.329725   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.329909   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.330088   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.333435   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.334161   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.334192   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.334480   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.334786   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.334959   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.335091   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.336340   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40483
	I1207 21:22:54.336672   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.337021   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.337034   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.337316   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.337486   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.338808   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.339043   50270 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:22:54.339053   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:22:54.339064   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.341591   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.341937   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.341960   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.342127   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.342285   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.342453   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.342592   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.385908   50270 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-483745" context rescaled to 1 replicas
	I1207 21:22:54.385959   50270 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:22:54.387637   50270 out.go:177] * Verifying Kubernetes components...
	I1207 21:22:54.388616   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:22:54.604286   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:22:54.671574   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:22:54.671601   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:22:54.752688   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:22:54.752714   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:22:54.792943   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:22:54.847458   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:22:54.847489   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:22:54.916698   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:22:54.931860   50270 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-483745" to be "Ready" ...
	I1207 21:22:54.931924   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:22:55.152010   50270 node_ready.go:49] node "old-k8s-version-483745" has status "Ready":"True"
	I1207 21:22:55.152041   50270 node_ready.go:38] duration metric: took 220.147741ms waiting for node "old-k8s-version-483745" to be "Ready" ...
	I1207 21:22:55.152055   50270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:55.356283   50270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:55.654243   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.049922238s)
	I1207 21:22:55.654296   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.654313   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.654661   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.654687   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.654694   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Closing plugin on server side
	I1207 21:22:55.654703   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.654715   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.655010   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.655052   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.693855   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.693876   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.694176   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.694197   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.927642   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.13465835s)
	I1207 21:22:55.927714   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.927731   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.928056   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.928076   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.928087   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.928096   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.928395   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Closing plugin on server side
	I1207 21:22:55.928413   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.928428   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.033797   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117050773s)
	I1207 21:22:56.033845   50270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.101898699s)
	I1207 21:22:56.033881   50270 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1207 21:22:56.033850   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:56.033918   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:56.034207   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:56.034220   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.034229   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:56.034236   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:56.034460   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:56.034480   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.034516   50270 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-483745"
	I1207 21:22:56.036701   50270 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1207 21:22:56.038078   50270 addons.go:502] enable addons completed in 1.76109636s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1207 21:22:57.718454   50270 pod_ready.go:102] pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:58.708880   50270 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-jvh5w" not found
	I1207 21:22:58.708910   50270 pod_ready.go:81] duration metric: took 3.352602717s waiting for pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace to be "Ready" ...
	E1207 21:22:58.708920   50270 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-jvh5w" not found
	I1207 21:22:58.708930   50270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.715179   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace has status "Ready":"True"
	I1207 21:22:58.715205   50270 pod_ready.go:81] duration metric: took 6.268335ms waiting for pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.715219   50270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-42fzb" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.720511   50270 pod_ready.go:92] pod "kube-proxy-42fzb" in "kube-system" namespace has status "Ready":"True"
	I1207 21:22:58.720526   50270 pod_ready.go:81] duration metric: took 5.302238ms waiting for pod "kube-proxy-42fzb" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.720544   50270 pod_ready.go:38] duration metric: took 3.568467628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:58.720558   50270 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:22:58.720609   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:22:58.737687   50270 api_server.go:72] duration metric: took 4.351680673s to wait for apiserver process to appear ...
	I1207 21:22:58.737712   50270 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:22:58.737730   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:22:58.744722   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:22:58.745867   50270 api_server.go:141] control plane version: v1.16.0
	I1207 21:22:58.745887   50270 api_server.go:131] duration metric: took 8.167725ms to wait for apiserver health ...
	I1207 21:22:58.745897   50270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:22:58.750259   50270 system_pods.go:59] 4 kube-system pods found
	I1207 21:22:58.750278   50270 system_pods.go:61] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.750283   50270 system_pods.go:61] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.750292   50270 system_pods.go:61] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.750306   50270 system_pods.go:61] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.750319   50270 system_pods.go:74] duration metric: took 4.415504ms to wait for pod list to return data ...
	I1207 21:22:58.750328   50270 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:22:58.753151   50270 default_sa.go:45] found service account: "default"
	I1207 21:22:58.753173   50270 default_sa.go:55] duration metric: took 2.836309ms for default service account to be created ...
	I1207 21:22:58.753181   50270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:22:58.757164   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:58.757188   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.757195   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.757212   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.757223   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.757246   50270 retry.go:31] will retry after 195.542562ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:58.957411   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:58.957443   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.957451   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.957461   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.957471   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.957494   50270 retry.go:31] will retry after 294.291725ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:59.264559   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:59.264599   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:59.264608   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:59.264620   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:59.264632   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:59.264651   50270 retry.go:31] will retry after 392.704433ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:59.663939   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:59.663967   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:59.663973   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:59.663979   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:59.663985   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:59.664003   50270 retry.go:31] will retry after 598.787872ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:00.268415   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:00.268441   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:00.268447   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:00.268453   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:00.268458   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:00.268472   50270 retry.go:31] will retry after 554.6659ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:00.829267   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:00.829293   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:00.829299   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:00.829305   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:00.829309   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:00.829325   50270 retry.go:31] will retry after 832.708436ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:01.667497   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:01.667526   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:01.667532   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:01.667539   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:01.667543   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:01.667560   50270 retry.go:31] will retry after 824.504206ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:02.497009   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:02.497033   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:02.497038   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:02.497045   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:02.497049   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:02.497064   50270 retry.go:31] will retry after 1.335460815s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:03.837788   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:03.837816   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:03.837821   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:03.837828   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:03.837833   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:03.837848   50270 retry.go:31] will retry after 1.185883705s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:05.028679   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:05.028712   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:05.028721   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:05.028731   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:05.028738   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:05.028758   50270 retry.go:31] will retry after 2.162817833s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:07.196435   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:07.196468   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:07.196476   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:07.196485   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:07.196493   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:07.196512   50270 retry.go:31] will retry after 2.853202831s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:10.054277   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:10.054303   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:10.054308   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:10.054315   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:10.054320   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:10.054335   50270 retry.go:31] will retry after 3.392213767s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:13.452019   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:13.452046   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:13.452052   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:13.452059   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:13.452064   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:13.452081   50270 retry.go:31] will retry after 3.42315118s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:16.882830   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:16.882856   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:16.882861   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:16.882868   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:16.882873   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:16.882887   50270 retry.go:31] will retry after 3.42232982s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:20.310740   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:20.310766   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:20.310771   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:20.310780   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:20.310785   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:20.310801   50270 retry.go:31] will retry after 6.110306117s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:26.426492   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:26.426520   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:26.426525   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:26.426532   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:26.426537   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:26.426554   50270 retry.go:31] will retry after 5.458076236s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:31.890544   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:31.890575   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:31.890580   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:31.890589   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:31.890593   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:31.890611   50270 retry.go:31] will retry after 10.030622922s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:41.928589   50270 system_pods.go:86] 6 kube-system pods found
	I1207 21:23:41.928622   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:41.928630   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:23:41.928637   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:23:41.928642   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:41.928651   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:41.928659   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:41.928677   50270 retry.go:31] will retry after 11.183539963s: missing components: kube-controller-manager, kube-scheduler
	I1207 21:23:53.119257   50270 system_pods.go:86] 8 kube-system pods found
	I1207 21:23:53.119284   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:53.119292   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:23:53.119298   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:23:53.119304   50270 system_pods.go:89] "kube-controller-manager-old-k8s-version-483745" [069a811c-4601-4e3c-bf64-77e4cf8d8e0e] Pending
	I1207 21:23:53.119309   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:53.119315   50270 system_pods.go:89] "kube-scheduler-old-k8s-version-483745" [1fa6f211-aa49-4ab9-ba1d-d613e7673ba8] Running
	I1207 21:23:53.119325   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:53.119332   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:53.119353   50270 retry.go:31] will retry after 13.123307809s: missing components: kube-controller-manager
	I1207 21:24:06.249016   50270 system_pods.go:86] 8 kube-system pods found
	I1207 21:24:06.249042   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:24:06.249048   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:24:06.249054   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:24:06.249059   50270 system_pods.go:89] "kube-controller-manager-old-k8s-version-483745" [069a811c-4601-4e3c-bf64-77e4cf8d8e0e] Running
	I1207 21:24:06.249064   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:24:06.249068   50270 system_pods.go:89] "kube-scheduler-old-k8s-version-483745" [1fa6f211-aa49-4ab9-ba1d-d613e7673ba8] Running
	I1207 21:24:06.249074   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:24:06.249079   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:24:06.249087   50270 system_pods.go:126] duration metric: took 1m7.495900916s to wait for k8s-apps to be running ...
	I1207 21:24:06.249092   50270 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:24:06.249137   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:24:06.265801   50270 system_svc.go:56] duration metric: took 16.700976ms WaitForService to wait for kubelet.
	I1207 21:24:06.265820   50270 kubeadm.go:581] duration metric: took 1m11.879821949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:24:06.265837   50270 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:24:06.269326   50270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:24:06.269346   50270 node_conditions.go:123] node cpu capacity is 2
	I1207 21:24:06.269356   50270 node_conditions.go:105] duration metric: took 3.51576ms to run NodePressure ...
	I1207 21:24:06.269366   50270 start.go:228] waiting for startup goroutines ...
	I1207 21:24:06.269371   50270 start.go:233] waiting for cluster config update ...
	I1207 21:24:06.269384   50270 start.go:242] writing updated cluster config ...
	I1207 21:24:06.269660   50270 ssh_runner.go:195] Run: rm -f paused
	I1207 21:24:06.317992   50270 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1207 21:24:06.320122   50270 out.go:177] 
	W1207 21:24:06.321437   50270 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1207 21:24:06.322708   50270 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1207 21:24:06.324092   50270 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-483745" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 21:16:17 UTC, ends at Thu 2023-12-07 21:30:27 UTC. --
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.114723232Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0a6c420b1a817c9e9ba9c1fc2ac08360f9bbdcf8b2b7cc04cedf26806b429d9e,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-drrlk,Uid:abdd350f-1ec9-42f2-aac8-63015e2f22c2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983830948080009,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-drrlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdd350f-1ec9-42f2-aac8-63015e2f22c2,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T21:16:55.056793786Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3356df4ab45c2121d4528d873921db6267ea95f75792c6cdb9f6799aaf6f1c53,Metadata:&PodSandboxMetadata{Name:busybox,Uid:40929895-a56a-4b7c-8f5e-2bf0e8711984,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1701983830937785084,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 40929895-a56a-4b7c-8f5e-2bf0e8711984,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T21:16:55.056788836Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b56874d5505edee332f2ca542f4e1deb15c53c789076044bd4eee06efaf96660,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-qvq95,Uid:ff9eb289-7fe2-4d11-a369-12b1c34a1937,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983823133519723,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-qvq95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9eb289-7fe2-4d11-a369-12b1c34a1937,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07
T21:16:55.056798955Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88d55f09318dbb4bb2faa009fb064007bc46be373bdbfcb3bb1904ab7811953d,Metadata:&PodSandboxMetadata{Name:kube-proxy-nmx2z,Uid:1f466e5e-a6b2-4413-b456-7a90bc120735,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983815414917310,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nmx2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f466e5e-a6b2-4413-b456-7a90bc120735,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T21:16:55.056797716Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983815408850629,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2023-12-07T21:16:55.056802081Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d061d98cecec543657c0a5cfcd5281c0a0b2b9b9f777ede392bd286600d4b1ef,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-275828,Uid:62ff3fe476a4d19df3c21e4eeff661f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983807612468927,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ff3fe476a4d19df3c21e4eeff661f5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.254:8444,kubernetes.io/config.hash: 62ff3fe476a4d19df3c21e4eeff661f5,kubernetes.io/config.seen: 2023-12-07T21:16:47.054749810Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6e5018b28ed3deba5fdc5ee96a4c6f1d2e58d929953e007c477c91c66e7748
f0,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-275828,Uid:63722c9beb08c64e87aca0ac5a03a3b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983807582806327,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63722c9beb08c64e87aca0ac5a03a3b3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.254:2379,kubernetes.io/config.hash: 63722c9beb08c64e87aca0ac5a03a3b3,kubernetes.io/config.seen: 2023-12-07T21:16:47.054741734Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:08021fa635918b8aac0028f4cb560e3f3e7c4ab30f3270c499d764886d23144a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-275828,Uid:d5571fbf22464376953aac83f089be6f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983807574130885,Labels:map[string]str
ing{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5571fbf22464376953aac83f089be6f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d5571fbf22464376953aac83f089be6f,kubernetes.io/config.seen: 2023-12-07T21:16:47.054740046Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e6cdd84c0e0b0df2e10217af3298e29e5ab61eda7863bf35c7bdfff025db6197,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-275828,Uid:5048811c4e837144b51e5bb09fb52972,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983807569892363,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5048811c4e837144b51e5bb09fb52972,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 5048811c4e837144b51e5bb09fb52972,kubernetes.io/config.seen: 2023-12-07T21:16:47.054733852Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=33f15977-7e71-47a0-b1e4-e27234d33a49 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.115539465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6bf2085d-8bc4-4cb0-8f9f-1fe9c706abb4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.115668782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6bf2085d-8bc4-4cb0-8f9f-1fe9c706abb4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.115845174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701983849340547526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf3f61d8578c5661844ba3a2252aba6bf9278a77f2fa9201d7f2c8d1555f9b6,PodSandboxId:3356df4ab45c2121d4528d873921db6267ea95f75792c6cdb9f6799aaf6f1c53,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701983834509768265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 40929895-a56a-4b7c-8f5e-2bf0e8711984,},Annotations:map[string]string{io.kubernetes.container.hash: 428bfa41,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7,PodSandboxId:0a6c420b1a817c9e9ba9c1fc2ac08360f9bbdcf8b2b7cc04cedf26806b429d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983831750009289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-drrlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdd350f-1ec9-42f2-aac8-63015e2f22c2,},Annotations:map[string]string{io.kubernetes.container.hash: b307c476,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9,PodSandboxId:88d55f09318dbb4bb2faa009fb064007bc46be373bdbfcb3bb1904ab7811953d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983817975704578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmx2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f466e5e-a6b2-4413-b456-7a90bc120735,},Annotations:map[string]string{io.kubernetes.container.hash: 76e83d38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701983817998362097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
dc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc,PodSandboxId:6e5018b28ed3deba5fdc5ee96a4c6f1d2e58d929953e007c477c91c66e7748f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983809318791800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63722c9beb08c64e87aca0ac5a03a3b3,},An
notations:map[string]string{io.kubernetes.container.hash: 5481d999,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4,PodSandboxId:08021fa635918b8aac0028f4cb560e3f3e7c4ab30f3270c499d764886d23144a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983808926515135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5571fbf22464376953aac83f089be6f,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358,PodSandboxId:d061d98cecec543657c0a5cfcd5281c0a0b2b9b9f777ede392bd286600d4b1ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983808285831614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ff3fe476a4d19df3c21e4eeff661f5,},An
notations:map[string]string{io.kubernetes.container.hash: 464e5f64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c,PodSandboxId:e6cdd84c0e0b0df2e10217af3298e29e5ab61eda7863bf35c7bdfff025db6197,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983808131844155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
048811c4e837144b51e5bb09fb52972,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6bf2085d-8bc4-4cb0-8f9f-1fe9c706abb4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.138285728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4a3c2c31-6d6c-4f69-bff9-91fcc969cf5c name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.138374295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4a3c2c31-6d6c-4f69-bff9-91fcc969cf5c name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.139461593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4b05c5e8-a69c-4bcd-ab9e-97d42d334f3c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.140244192Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984627140227255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4b05c5e8-a69c-4bcd-ab9e-97d42d334f3c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.140904393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=40c07802-9ff6-4921-afbf-769a71097404 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.140979489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=40c07802-9ff6-4921-afbf-769a71097404 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.141163348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701983849340547526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf3f61d8578c5661844ba3a2252aba6bf9278a77f2fa9201d7f2c8d1555f9b6,PodSandboxId:3356df4ab45c2121d4528d873921db6267ea95f75792c6cdb9f6799aaf6f1c53,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701983834509768265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 40929895-a56a-4b7c-8f5e-2bf0e8711984,},Annotations:map[string]string{io.kubernetes.container.hash: 428bfa41,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7,PodSandboxId:0a6c420b1a817c9e9ba9c1fc2ac08360f9bbdcf8b2b7cc04cedf26806b429d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983831750009289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-drrlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdd350f-1ec9-42f2-aac8-63015e2f22c2,},Annotations:map[string]string{io.kubernetes.container.hash: b307c476,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9,PodSandboxId:88d55f09318dbb4bb2faa009fb064007bc46be373bdbfcb3bb1904ab7811953d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983817975704578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmx2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f466e5e-a6b2-4413-b456-7a90bc120735,},Annotations:map[string]string{io.kubernetes.container.hash: 76e83d38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701983817998362097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
dc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc,PodSandboxId:6e5018b28ed3deba5fdc5ee96a4c6f1d2e58d929953e007c477c91c66e7748f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983809318791800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63722c9beb08c64e87aca0ac5a03a3b3,},An
notations:map[string]string{io.kubernetes.container.hash: 5481d999,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4,PodSandboxId:08021fa635918b8aac0028f4cb560e3f3e7c4ab30f3270c499d764886d23144a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983808926515135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5571fbf22464376953aac83f089be6f,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358,PodSandboxId:d061d98cecec543657c0a5cfcd5281c0a0b2b9b9f777ede392bd286600d4b1ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983808285831614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ff3fe476a4d19df3c21e4eeff661f5,},An
notations:map[string]string{io.kubernetes.container.hash: 464e5f64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c,PodSandboxId:e6cdd84c0e0b0df2e10217af3298e29e5ab61eda7863bf35c7bdfff025db6197,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983808131844155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
048811c4e837144b51e5bb09fb52972,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=40c07802-9ff6-4921-afbf-769a71097404 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.180002814Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d230c928-43e3-472c-94b9-0f3603167f7d name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.180099232Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d230c928-43e3-472c-94b9-0f3603167f7d name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.182520505Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=14607dd7-4fbd-4edd-b11c-1c2c4f9938b2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.183075700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984627182964843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=14607dd7-4fbd-4edd-b11c-1c2c4f9938b2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.183802547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=50512377-aa57-462b-914d-95d57cf96379 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.183876019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=50512377-aa57-462b-914d-95d57cf96379 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.184168044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701983849340547526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf3f61d8578c5661844ba3a2252aba6bf9278a77f2fa9201d7f2c8d1555f9b6,PodSandboxId:3356df4ab45c2121d4528d873921db6267ea95f75792c6cdb9f6799aaf6f1c53,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701983834509768265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 40929895-a56a-4b7c-8f5e-2bf0e8711984,},Annotations:map[string]string{io.kubernetes.container.hash: 428bfa41,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7,PodSandboxId:0a6c420b1a817c9e9ba9c1fc2ac08360f9bbdcf8b2b7cc04cedf26806b429d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983831750009289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-drrlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdd350f-1ec9-42f2-aac8-63015e2f22c2,},Annotations:map[string]string{io.kubernetes.container.hash: b307c476,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9,PodSandboxId:88d55f09318dbb4bb2faa009fb064007bc46be373bdbfcb3bb1904ab7811953d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983817975704578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmx2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f466e5e-a6b2-4413-b456-7a90bc120735,},Annotations:map[string]string{io.kubernetes.container.hash: 76e83d38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701983817998362097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
dc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc,PodSandboxId:6e5018b28ed3deba5fdc5ee96a4c6f1d2e58d929953e007c477c91c66e7748f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983809318791800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63722c9beb08c64e87aca0ac5a03a3b3,},An
notations:map[string]string{io.kubernetes.container.hash: 5481d999,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4,PodSandboxId:08021fa635918b8aac0028f4cb560e3f3e7c4ab30f3270c499d764886d23144a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983808926515135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5571fbf22464376953aac83f089be6f,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358,PodSandboxId:d061d98cecec543657c0a5cfcd5281c0a0b2b9b9f777ede392bd286600d4b1ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983808285831614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ff3fe476a4d19df3c21e4eeff661f5,},An
notations:map[string]string{io.kubernetes.container.hash: 464e5f64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c,PodSandboxId:e6cdd84c0e0b0df2e10217af3298e29e5ab61eda7863bf35c7bdfff025db6197,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983808131844155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
048811c4e837144b51e5bb09fb52972,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=50512377-aa57-462b-914d-95d57cf96379 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.221059795Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9196a4ed-f402-43d7-a8e6-cede166fd9f7 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.221148212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9196a4ed-f402-43d7-a8e6-cede166fd9f7 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.222777655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=beccc778-37a0-415b-81aa-1f1f6a651c52 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.223230695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984627223215460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=beccc778-37a0-415b-81aa-1f1f6a651c52 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.224322632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2c7066fd-2f97-477a-839e-7c177be15686 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.224400518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2c7066fd-2f97-477a-839e-7c177be15686 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:27 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:30:27.224758741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701983849340547526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf3f61d8578c5661844ba3a2252aba6bf9278a77f2fa9201d7f2c8d1555f9b6,PodSandboxId:3356df4ab45c2121d4528d873921db6267ea95f75792c6cdb9f6799aaf6f1c53,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701983834509768265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 40929895-a56a-4b7c-8f5e-2bf0e8711984,},Annotations:map[string]string{io.kubernetes.container.hash: 428bfa41,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7,PodSandboxId:0a6c420b1a817c9e9ba9c1fc2ac08360f9bbdcf8b2b7cc04cedf26806b429d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983831750009289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-drrlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdd350f-1ec9-42f2-aac8-63015e2f22c2,},Annotations:map[string]string{io.kubernetes.container.hash: b307c476,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9,PodSandboxId:88d55f09318dbb4bb2faa009fb064007bc46be373bdbfcb3bb1904ab7811953d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983817975704578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmx2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f466e5e-a6b2-4413-b456-7a90bc120735,},Annotations:map[string]string{io.kubernetes.container.hash: 76e83d38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701983817998362097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
dc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc,PodSandboxId:6e5018b28ed3deba5fdc5ee96a4c6f1d2e58d929953e007c477c91c66e7748f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983809318791800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63722c9beb08c64e87aca0ac5a03a3b3,},An
notations:map[string]string{io.kubernetes.container.hash: 5481d999,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4,PodSandboxId:08021fa635918b8aac0028f4cb560e3f3e7c4ab30f3270c499d764886d23144a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983808926515135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5571fbf22464376953aac83f089be6f,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358,PodSandboxId:d061d98cecec543657c0a5cfcd5281c0a0b2b9b9f777ede392bd286600d4b1ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983808285831614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ff3fe476a4d19df3c21e4eeff661f5,},An
notations:map[string]string{io.kubernetes.container.hash: 464e5f64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c,PodSandboxId:e6cdd84c0e0b0df2e10217af3298e29e5ab61eda7863bf35c7bdfff025db6197,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983808131844155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
048811c4e837144b51e5bb09fb52972,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2c7066fd-2f97-477a-839e-7c177be15686 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6d19830626a12       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   e6ba8d1d11e85       storage-provisioner
	bcf3f61d8578c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   3356df4ab45c2       busybox
	5a99c774cf004       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   0a6c420b1a817       coredns-5dd5756b68-drrlk
	40b29d34e8a9e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   e6ba8d1d11e85       storage-provisioner
	e5f03abdf541c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   88d55f09318db       kube-proxy-nmx2z
	333f8e7b3b0ba       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   6e5018b28ed3d       etcd-default-k8s-diff-port-275828
	3d55aee82d6e7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   08021fa635918       kube-scheduler-default-k8s-diff-port-275828
	0127dcb687572       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   d061d98cecec5       kube-apiserver-default-k8s-diff-port-275828
	2dfc84b682d89       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   e6cdd84c0e0b0       kube-controller-manager-default-k8s-diff-port-275828
	
	* 
	* ==> coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58782 - 47439 "HINFO IN 411158688030276708.324194747714498229. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010667086s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-275828
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-275828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=default-k8s-diff-port-275828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T21_09_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 21:09:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-275828
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 21:30:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 21:27:38 +0000   Thu, 07 Dec 2023 21:09:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 21:27:38 +0000   Thu, 07 Dec 2023 21:09:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 21:27:38 +0000   Thu, 07 Dec 2023 21:09:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 21:27:38 +0000   Thu, 07 Dec 2023 21:17:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.254
	  Hostname:    default-k8s-diff-port-275828
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 893c2f24b7204674972dc2ee75339e3b
	  System UUID:                893c2f24-b720-4674-972d-c2ee75339e3b
	  Boot ID:                    94a71f66-7149-4dfc-9904-3e6c7e919bc9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-drrlk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-275828                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-275828             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-275828    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-nmx2z                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-275828             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-qvq95                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-275828 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-275828 event: Registered Node default-k8s-diff-port-275828 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-275828 event: Registered Node default-k8s-diff-port-275828 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 7 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070321] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.571142] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.834581] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149351] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.544734] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.875963] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.108835] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.163315] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.114013] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.242982] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[ +17.679029] systemd-fstab-generator[922]: Ignoring "noauto" for root device
	[Dec 7 21:17] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] <==
	* {"level":"info","ts":"2023-12-07T21:16:52.861885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b8de1e5bd82ef2a received MsgPreVoteResp from 9b8de1e5bd82ef2a at term 2"}
	{"level":"info","ts":"2023-12-07T21:16:52.861972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b8de1e5bd82ef2a became candidate at term 3"}
	{"level":"info","ts":"2023-12-07T21:16:52.862014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b8de1e5bd82ef2a received MsgVoteResp from 9b8de1e5bd82ef2a at term 3"}
	{"level":"info","ts":"2023-12-07T21:16:52.862059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b8de1e5bd82ef2a became leader at term 3"}
	{"level":"info","ts":"2023-12-07T21:16:52.8621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b8de1e5bd82ef2a elected leader 9b8de1e5bd82ef2a at term 3"}
	{"level":"info","ts":"2023-12-07T21:16:52.864388Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9b8de1e5bd82ef2a","local-member-attributes":"{Name:default-k8s-diff-port-275828 ClientURLs:[https://192.168.39.254:2379]}","request-path":"/0/members/9b8de1e5bd82ef2a/attributes","cluster-id":"7053bcffcda7710c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T21:16:52.864458Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:16:52.865785Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:16:52.866878Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.254:2379"}
	{"level":"info","ts":"2023-12-07T21:16:52.867341Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T21:16:52.883728Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T21:16:52.883843Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-12-07T21:16:57.122402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"816.216079ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17233741156965570591 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.254\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/192.168.39.254\" value_size:67 lease:8010369120110794781 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.254\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-07T21:16:57.126256Z","caller":"traceutil/trace.go:171","msg":"trace[2033525217] linearizableReadLoop","detail":"{readStateIndex:549; appliedIndex:548; }","duration":"450.159768ms","start":"2023-12-07T21:16:56.676074Z","end":"2023-12-07T21:16:57.126234Z","steps":["trace[2033525217] 'read index received'  (duration: 23.713µs)","trace[2033525217] 'applied index is now lower than readState.Index'  (duration: 450.134662ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-07T21:16:57.126378Z","caller":"traceutil/trace.go:171","msg":"trace[295227382] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"1.071696716s","start":"2023-12-07T21:16:56.054661Z","end":"2023-12-07T21:16:57.126358Z","steps":["trace[295227382] 'process raft request'  (duration: 250.493921ms)","trace[295227382] 'compare'  (duration: 816.079444ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T21:16:57.126487Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:16:56.05455Z","time spent":"1.071860697s","remote":"127.0.0.1:39836","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.254\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/192.168.39.254\" value_size:67 lease:8010369120110794781 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.254\" > >"}
	{"level":"warn","ts":"2023-12-07T21:16:57.127011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"450.943591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-12-07T21:16:57.127047Z","caller":"traceutil/trace.go:171","msg":"trace[158886616] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:520; }","duration":"450.990003ms","start":"2023-12-07T21:16:56.676049Z","end":"2023-12-07T21:16:57.127039Z","steps":["trace[158886616] 'agreement among raft nodes before linearized reading'  (duration: 450.246717ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:16:57.127074Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:16:56.676035Z","time spent":"451.031854ms","remote":"127.0.0.1:39874","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":230,"request content":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" "}
	{"level":"warn","ts":"2023-12-07T21:16:57.949345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"702.408707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:39761"}
	{"level":"info","ts":"2023-12-07T21:16:57.949527Z","caller":"traceutil/trace.go:171","msg":"trace[1453040334] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:520; }","duration":"702.605297ms","start":"2023-12-07T21:16:57.24691Z","end":"2023-12-07T21:16:57.949515Z","steps":["trace[1453040334] 'range keys from in-memory index tree'  (duration: 702.106505ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:16:57.949626Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:16:57.246898Z","time spent":"702.71688ms","remote":"127.0.0.1:39870","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":8,"response size":39784,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2023-12-07T21:26:52.909762Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":872}
	{"level":"info","ts":"2023-12-07T21:26:52.912864Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":872,"took":"2.58114ms","hash":138780491}
	{"level":"info","ts":"2023-12-07T21:26:52.912986Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":138780491,"revision":872,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  21:30:27 up 14 min,  0 users,  load average: 0.14, 0.19, 0.18
	Linux default-k8s-diff-port-275828 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] <==
	* I1207 21:26:54.646015       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:26:55.646432       1 handler_proxy.go:93] no RequestInfo found in the context
	W1207 21:26:55.646455       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:26:55.646721       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:26:55.646771       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1207 21:26:55.646820       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:26:55.648801       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:27:54.534857       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:27:55.647750       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:27:55.647862       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:27:55.647876       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:27:55.649107       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:27:55.649162       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:27:55.649173       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:28:54.534215       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1207 21:29:54.534948       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:29:55.649032       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:29:55.649188       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:29:55.649246       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:29:55.649302       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:29:55.649333       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:29:55.650514       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] <==
	* I1207 21:24:38.127684       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:25:07.660854       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:25:08.138291       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:25:37.667014       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:25:38.147361       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:26:07.672793       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:26:08.158471       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:26:37.679045       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:26:38.167478       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:27:07.687397       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:27:08.178773       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:27:37.693183       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:27:38.187191       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:28:07.700053       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:28:08.199624       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1207 21:28:15.127411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="416.64µs"
	I1207 21:28:30.119921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="130.203µs"
	E1207 21:28:37.705485       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:28:38.208119       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:29:07.712423       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:29:08.217034       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:29:37.718553       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:29:38.225864       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:30:07.725878       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:30:08.236966       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] <==
	* I1207 21:16:58.611922       1 server_others.go:69] "Using iptables proxy"
	I1207 21:16:58.658053       1 node.go:141] Successfully retrieved node IP: 192.168.39.254
	I1207 21:16:58.767484       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 21:16:58.767669       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 21:16:58.776064       1 server_others.go:152] "Using iptables Proxier"
	I1207 21:16:58.776175       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 21:16:58.781493       1 server.go:846] "Version info" version="v1.28.4"
	I1207 21:16:58.781780       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 21:16:58.783148       1 config.go:188] "Starting service config controller"
	I1207 21:16:58.783992       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 21:16:58.784724       1 config.go:315] "Starting node config controller"
	I1207 21:16:58.791737       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 21:16:58.784918       1 config.go:97] "Starting endpoint slice config controller"
	I1207 21:16:58.795071       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 21:16:58.795213       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 21:16:58.884761       1 shared_informer.go:318] Caches are synced for service config
	I1207 21:16:58.891921       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] <==
	* W1207 21:16:54.669452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:16:54.669497       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 21:16:54.669643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1207 21:16:54.669689       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1207 21:16:54.669869       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:16:54.669927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 21:16:54.669889       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 21:16:54.669978       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1207 21:16:54.670126       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 21:16:54.670169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 21:16:54.670261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1207 21:16:54.673804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 21:16:54.673856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1207 21:16:54.673963       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 21:16:54.673997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 21:16:54.674079       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 21:16:54.674115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1207 21:16:54.674190       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 21:16:54.674229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1207 21:16:54.674271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 21:16:54.674305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 21:16:54.673800       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1207 21:16:54.674516       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1207 21:16:54.674653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1207 21:16:55.653316       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 21:16:17 UTC, ends at Thu 2023-12-07 21:30:27 UTC. --
	Dec 07 21:27:47 default-k8s-diff-port-275828 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:27:47 default-k8s-diff-port-275828 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:27:47 default-k8s-diff-port-275828 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:28:00 default-k8s-diff-port-275828 kubelet[928]: E1207 21:28:00.114214     928 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 07 21:28:00 default-k8s-diff-port-275828 kubelet[928]: E1207 21:28:00.114258     928 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 07 21:28:00 default-k8s-diff-port-275828 kubelet[928]: E1207 21:28:00.114631     928 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l9pgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-qvq95_kube-system(ff9eb289-7fe2-4d11-a369-12b1c34a1937): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 07 21:28:00 default-k8s-diff-port-275828 kubelet[928]: E1207 21:28:00.114693     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:28:15 default-k8s-diff-port-275828 kubelet[928]: E1207 21:28:15.102084     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:28:30 default-k8s-diff-port-275828 kubelet[928]: E1207 21:28:30.099891     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:28:43 default-k8s-diff-port-275828 kubelet[928]: E1207 21:28:43.099698     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:28:47 default-k8s-diff-port-275828 kubelet[928]: E1207 21:28:47.115939     928 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:28:47 default-k8s-diff-port-275828 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:28:47 default-k8s-diff-port-275828 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:28:47 default-k8s-diff-port-275828 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:28:57 default-k8s-diff-port-275828 kubelet[928]: E1207 21:28:57.101416     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:29:10 default-k8s-diff-port-275828 kubelet[928]: E1207 21:29:10.099628     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:29:22 default-k8s-diff-port-275828 kubelet[928]: E1207 21:29:22.099448     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:29:37 default-k8s-diff-port-275828 kubelet[928]: E1207 21:29:37.100047     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:29:47 default-k8s-diff-port-275828 kubelet[928]: E1207 21:29:47.116638     928 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:29:47 default-k8s-diff-port-275828 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:29:47 default-k8s-diff-port-275828 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:29:47 default-k8s-diff-port-275828 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:29:49 default-k8s-diff-port-275828 kubelet[928]: E1207 21:29:49.099812     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:30:01 default-k8s-diff-port-275828 kubelet[928]: E1207 21:30:01.101090     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:30:13 default-k8s-diff-port-275828 kubelet[928]: E1207 21:30:13.099350     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	
	* 
	* ==> storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] <==
	* I1207 21:16:58.341252       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 21:17:28.344919       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] <==
	* I1207 21:17:29.449360       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 21:17:29.464945       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 21:17:29.465084       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 21:17:46.870294       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 21:17:46.870848       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77fef67e-71b0-4413-86fa-eb3e04ca573f", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-275828_73d74afe-a48d-4e7a-a97d-cdb8f6434c76 became leader
	I1207 21:17:46.870914       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-275828_73d74afe-a48d-4e7a-a97d-cdb8f6434c76!
	I1207 21:17:46.971949       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-275828_73d74afe-a48d-4e7a-a97d-cdb8f6434c76!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-275828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qvq95
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-275828 describe pod metrics-server-57f55c9bc5-qvq95
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-275828 describe pod metrics-server-57f55c9bc5-qvq95: exit status 1 (65.6803ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qvq95" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-275828 describe pod metrics-server-57f55c9bc5-qvq95: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.17s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.12s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1207 21:23:04.748443   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-950431 -n no-preload-950431
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-07 21:30:43.728862023 +0000 UTC m=+5367.897006987
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950431 -n no-preload-950431
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-950431 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-950431 logs -n 25: (1.585198122s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-620116 -- sudo                         | cert-options-620116          | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-620116                                 | cert-options-620116          | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:10 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:08 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-099448                              | stopped-upgrade-099448       | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:07 UTC |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-483745        | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-121798 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | disable-driver-mounts-121798                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:10 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-598346            | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC | 07 Dec 23 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-950431             | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-275828  | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-483745             | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-598346                 | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-950431                  | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-275828       | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 21:12:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 21:12:54.827966   51113 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:12:54.828121   51113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:12:54.828131   51113 out.go:309] Setting ErrFile to fd 2...
	I1207 21:12:54.828138   51113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:12:54.828309   51113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:12:54.828894   51113 out.go:303] Setting JSON to false
	I1207 21:12:54.829778   51113 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6921,"bootTime":1701976654,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:12:54.829872   51113 start.go:138] virtualization: kvm guest
	I1207 21:12:54.832359   51113 out.go:177] * [default-k8s-diff-port-275828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:12:54.833958   51113 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:12:54.833997   51113 notify.go:220] Checking for updates...
	I1207 21:12:54.835484   51113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:12:54.837345   51113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:12:54.838716   51113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:12:54.840105   51113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:12:54.841497   51113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:12:54.843170   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:12:54.843587   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:12:54.843638   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:12:54.857987   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I1207 21:12:54.858345   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:12:54.858826   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:12:54.858846   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:12:54.859141   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:12:54.859317   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:12:54.859528   51113 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:12:54.859797   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:12:54.859827   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:12:54.873523   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1207 21:12:54.873866   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:12:54.874374   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:12:54.874399   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:12:54.874726   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:12:54.874907   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:12:54.906909   51113 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 21:12:54.908496   51113 start.go:298] selected driver: kvm2
	I1207 21:12:54.908515   51113 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:12:54.908626   51113 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:12:54.909287   51113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:54.909431   51113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:12:54.924711   51113 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:12:54.925077   51113 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 21:12:54.925136   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:12:54.925149   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:12:54.925174   51113 start_flags.go:323] config:
	{Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-27582
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:12:54.925311   51113 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:54.927216   51113 out.go:177] * Starting control plane node default-k8s-diff-port-275828 in cluster default-k8s-diff-port-275828
	I1207 21:12:51.859250   51037 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:12:51.859366   51037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:12:51.859440   51037 cache.go:107] acquiring lock: {Name:mke7b9cce1dd6177935767b47cf17b792acd813b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859507   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 21:12:51.859492   51037 cache.go:107] acquiring lock: {Name:mk57eae37995939df6ffd0df03832314e9e6100e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859493   51037 cache.go:107] acquiring lock: {Name:mk5a91936dc04372c96de7514149d2b4b0d17dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859522   51037 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.402µs
	I1207 21:12:51.859538   51037 cache.go:107] acquiring lock: {Name:mk4c716c1104ca016c5e335d1cbf204f19d0197f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859560   51037 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 21:12:51.859581   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 exists
	I1207 21:12:51.859591   51037 start.go:365] acquiring machines lock for no-preload-950431: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:12:51.859593   51037 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1" took 111.482µs
	I1207 21:12:51.859611   51037 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859596   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 exists
	I1207 21:12:51.859564   51037 cache.go:107] acquiring lock: {Name:mke02250ffd1d3b6fb4470dd05093397053b289d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859627   51037 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1" took 139.857µs
	I1207 21:12:51.859637   51037 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859588   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I1207 21:12:51.859647   51037 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 112.196µs
	I1207 21:12:51.859621   51037 cache.go:107] acquiring lock: {Name:mk2a1c8afaf74efaf0daac8bf102ee63aa4b5154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859664   51037 cache.go:107] acquiring lock: {Name:mk042626599761dccdc47fcf8ee95d59d24917b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859660   51037 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I1207 21:12:51.859443   51037 cache.go:107] acquiring lock: {Name:mk69e12850117516cff168d811605a739d29808c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859701   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I1207 21:12:51.859715   51037 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 185.872µs
	I1207 21:12:51.859736   51037 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I1207 21:12:51.859728   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 exists
	I1207 21:12:51.859750   51037 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1" took 313.668µs
	I1207 21:12:51.859758   51037 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859796   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 exists
	I1207 21:12:51.859809   51037 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1" took 179.42µs
	I1207 21:12:51.859823   51037 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859808   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I1207 21:12:51.859910   51037 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 310.345µs
	I1207 21:12:51.859931   51037 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I1207 21:12:51.859947   51037 cache.go:87] Successfully saved all images to host disk.
	I1207 21:12:57.714205   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:12:54.928473   51113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:12:54.928503   51113 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 21:12:54.928516   51113 cache.go:56] Caching tarball of preloaded images
	I1207 21:12:54.928608   51113 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:12:54.928621   51113 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 21:12:54.928718   51113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:12:54.928893   51113 start.go:365] acquiring machines lock for default-k8s-diff-port-275828: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:13:00.786234   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:06.866234   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:09.938211   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:16.018206   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:19.090196   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:25.170164   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:28.242299   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:34.322194   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:37.394241   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:43.474183   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:46.546186   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:52.626214   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:55.698176   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:01.778218   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:04.850228   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:10.930239   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:14.002222   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:20.082270   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:23.154237   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:29.234226   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:32.306242   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:38.386218   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:41.458157   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:47.538219   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:50.610223   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:56.690260   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:59.766215   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:05.842220   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:08.914154   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:14.994193   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:18.066232   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:21.070365   50624 start.go:369] acquired machines lock for "embed-certs-598346" in 3m44.734224905s
	I1207 21:15:21.070421   50624 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:15:21.070427   50624 fix.go:54] fixHost starting: 
	I1207 21:15:21.070755   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:15:21.070787   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:15:21.085298   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I1207 21:15:21.085643   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:15:21.086150   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:15:21.086172   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:15:21.086491   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:15:21.086681   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:21.086828   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:15:21.088256   50624 fix.go:102] recreateIfNeeded on embed-certs-598346: state=Stopped err=<nil>
	I1207 21:15:21.088283   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	W1207 21:15:21.088465   50624 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:15:21.090020   50624 out.go:177] * Restarting existing kvm2 VM for "embed-certs-598346" ...
	I1207 21:15:21.091364   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Start
	I1207 21:15:21.091521   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring networks are active...
	I1207 21:15:21.092215   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring network default is active
	I1207 21:15:21.092551   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring network mk-embed-certs-598346 is active
	I1207 21:15:21.092938   50624 main.go:141] libmachine: (embed-certs-598346) Getting domain xml...
	I1207 21:15:21.093647   50624 main.go:141] libmachine: (embed-certs-598346) Creating domain...
	I1207 21:15:21.067977   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:15:21.068024   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:15:21.070214   50270 machine.go:91] provisioned docker machine in 4m37.409386757s
	I1207 21:15:21.070272   50270 fix.go:56] fixHost completed within 4m37.430493841s
	I1207 21:15:21.070280   50270 start.go:83] releasing machines lock for "old-k8s-version-483745", held for 4m37.43051315s
	W1207 21:15:21.070299   50270 start.go:694] error starting host: provision: host is not running
	W1207 21:15:21.070399   50270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1207 21:15:21.070408   50270 start.go:709] Will try again in 5 seconds ...
	I1207 21:15:22.319220   50624 main.go:141] libmachine: (embed-certs-598346) Waiting to get IP...
	I1207 21:15:22.320059   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.320432   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.320505   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.320416   51516 retry.go:31] will retry after 306.732639ms: waiting for machine to come up
	I1207 21:15:22.629026   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.629495   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.629523   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.629465   51516 retry.go:31] will retry after 244.665765ms: waiting for machine to come up
	I1207 21:15:22.875896   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.876248   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.876275   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.876210   51516 retry.go:31] will retry after 389.522298ms: waiting for machine to come up
	I1207 21:15:23.267728   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:23.268119   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:23.268140   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:23.268064   51516 retry.go:31] will retry after 521.34699ms: waiting for machine to come up
	I1207 21:15:23.790614   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:23.791043   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:23.791067   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:23.791002   51516 retry.go:31] will retry after 493.71234ms: waiting for machine to come up
	I1207 21:15:24.286698   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:24.287121   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:24.287145   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:24.287061   51516 retry.go:31] will retry after 736.984501ms: waiting for machine to come up
	I1207 21:15:25.025941   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:25.026294   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:25.026317   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:25.026256   51516 retry.go:31] will retry after 1.06643424s: waiting for machine to come up
	I1207 21:15:26.093760   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:26.094266   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:26.094306   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:26.094211   51516 retry.go:31] will retry after 1.226791228s: waiting for machine to come up
	I1207 21:15:26.072827   50270 start.go:365] acquiring machines lock for old-k8s-version-483745: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:15:27.322536   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:27.322912   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:27.322940   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:27.322857   51516 retry.go:31] will retry after 1.246504696s: waiting for machine to come up
	I1207 21:15:28.571241   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:28.571651   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:28.571677   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:28.571606   51516 retry.go:31] will retry after 2.084958391s: waiting for machine to come up
	I1207 21:15:30.658654   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:30.659047   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:30.659080   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:30.658990   51516 retry.go:31] will retry after 2.104944011s: waiting for machine to come up
	I1207 21:15:32.765669   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:32.766136   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:32.766167   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:32.766076   51516 retry.go:31] will retry after 3.05038185s: waiting for machine to come up
	I1207 21:15:35.819082   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:35.819446   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:35.819477   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:35.819399   51516 retry.go:31] will retry after 3.445969037s: waiting for machine to come up
	I1207 21:15:40.686593   51037 start.go:369] acquired machines lock for "no-preload-950431" in 2m48.82697748s
	I1207 21:15:40.686639   51037 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:15:40.686646   51037 fix.go:54] fixHost starting: 
	I1207 21:15:40.687011   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:15:40.687043   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:15:40.703294   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I1207 21:15:40.703682   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:15:40.704245   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:15:40.704276   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:15:40.704620   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:15:40.704792   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:15:40.704938   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:15:40.706394   51037 fix.go:102] recreateIfNeeded on no-preload-950431: state=Stopped err=<nil>
	I1207 21:15:40.706420   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	W1207 21:15:40.706593   51037 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:15:40.709148   51037 out.go:177] * Restarting existing kvm2 VM for "no-preload-950431" ...
	I1207 21:15:39.269367   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.269776   50624 main.go:141] libmachine: (embed-certs-598346) Found IP for machine: 192.168.72.180
	I1207 21:15:39.269802   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has current primary IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.269808   50624 main.go:141] libmachine: (embed-certs-598346) Reserving static IP address...
	I1207 21:15:39.270234   50624 main.go:141] libmachine: (embed-certs-598346) Reserved static IP address: 192.168.72.180
	I1207 21:15:39.270265   50624 main.go:141] libmachine: (embed-certs-598346) Waiting for SSH to be available...
	I1207 21:15:39.270279   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "embed-certs-598346", mac: "52:54:00:15:56:8f", ip: "192.168.72.180"} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.270308   50624 main.go:141] libmachine: (embed-certs-598346) DBG | skip adding static IP to network mk-embed-certs-598346 - found existing host DHCP lease matching {name: "embed-certs-598346", mac: "52:54:00:15:56:8f", ip: "192.168.72.180"}
	I1207 21:15:39.270325   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Getting to WaitForSSH function...
	I1207 21:15:39.272292   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.272639   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.272674   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.272773   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Using SSH client type: external
	I1207 21:15:39.272827   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa (-rw-------)
	I1207 21:15:39.272869   50624 main.go:141] libmachine: (embed-certs-598346) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:15:39.272887   50624 main.go:141] libmachine: (embed-certs-598346) DBG | About to run SSH command:
	I1207 21:15:39.272903   50624 main.go:141] libmachine: (embed-certs-598346) DBG | exit 0
	I1207 21:15:39.363326   50624 main.go:141] libmachine: (embed-certs-598346) DBG | SSH cmd err, output: <nil>: 
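
The block above is libmachine's SSH wait: it shells out to the system ssh client and runs a no-op `exit 0` against the guest until the command succeeds. A minimal sketch of that probe follows; the `waitForSSH` helper, the key path, and the timeout are illustrative assumptions, not minikube's actual code.

```go
// Sketch: probe SSH readiness by running "exit 0" over ssh until it succeeds
// or a deadline passes. Host and flags mirror the log above; everything else
// is assumed for illustration.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// Mirrors the external-ssh invocation in the log: host-key checking
		// off, identity file only, run a no-op command.
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd answered and the command exited 0
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh on %s not ready after %s", host, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	// Key path is a placeholder, not the path from the log.
	if err := waitForSSH("192.168.72.180", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
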
	I1207 21:15:39.363757   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetConfigRaw
	I1207 21:15:39.364301   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:39.366828   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.367157   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.367206   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.367459   50624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/config.json ...
	I1207 21:15:39.367693   50624 machine.go:88] provisioning docker machine ...
	I1207 21:15:39.367713   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:39.367918   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.368085   50624 buildroot.go:166] provisioning hostname "embed-certs-598346"
	I1207 21:15:39.368104   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.368241   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.370443   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.370771   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.370798   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.371044   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.371192   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.371358   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.371507   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.371660   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:39.372058   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:39.372078   50624 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-598346 && echo "embed-certs-598346" | sudo tee /etc/hostname
	I1207 21:15:39.498370   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-598346
	
	I1207 21:15:39.498394   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.501284   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.501691   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.501711   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.501952   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.502135   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.502267   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.502432   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.502604   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:39.503052   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:39.503091   50624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-598346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-598346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-598346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:15:39.625683   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:15:39.625713   50624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:15:39.625735   50624 buildroot.go:174] setting up certificates
	I1207 21:15:39.625748   50624 provision.go:83] configureAuth start
	I1207 21:15:39.625760   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.626074   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:39.628753   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.629102   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.629125   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.629277   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.631206   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.631478   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.631507   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.631632   50624 provision.go:138] copyHostCerts
	I1207 21:15:39.631682   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:15:39.631698   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:15:39.631763   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:15:39.631844   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:15:39.631852   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:15:39.631874   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:15:39.631922   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:15:39.631928   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:15:39.631951   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:15:39.631993   50624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.embed-certs-598346 san=[192.168.72.180 192.168.72.180 localhost 127.0.0.1 minikube embed-certs-598346]
	I1207 21:15:39.968036   50624 provision.go:172] copyRemoteCerts
	I1207 21:15:39.968098   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:15:39.968121   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.970937   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.971356   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.971386   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.971627   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.971847   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.972010   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.972148   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.060156   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:15:40.082673   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1207 21:15:40.104263   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:15:40.125974   50624 provision.go:86] duration metric: configureAuth took 500.211549ms
	I1207 21:15:40.126012   50624 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:15:40.126233   50624 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:15:40.126317   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.129108   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.129484   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.129505   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.129662   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.129884   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.130039   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.130197   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.130358   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:40.130677   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:40.130698   50624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:15:40.439407   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:15:40.439438   50624 machine.go:91] provisioned docker machine in 1.071729841s
	I1207 21:15:40.439451   50624 start.go:300] post-start starting for "embed-certs-598346" (driver="kvm2")
	I1207 21:15:40.439465   50624 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:15:40.439504   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.439827   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:15:40.439860   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.442750   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.443135   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.443160   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.443400   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.443623   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.443811   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.443974   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.531350   50624 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:15:40.535614   50624 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:15:40.535644   50624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:15:40.535720   50624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:15:40.535813   50624 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:15:40.535938   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:15:40.543981   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:15:40.566714   50624 start.go:303] post-start completed in 127.248268ms
	I1207 21:15:40.566739   50624 fix.go:56] fixHost completed within 19.496310567s
	I1207 21:15:40.566763   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.569439   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.569774   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.569791   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.569915   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.570085   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.570257   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.570386   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.570534   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:40.570842   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:40.570855   50624 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 21:15:40.686455   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983740.637211698
	
	I1207 21:15:40.686479   50624 fix.go:206] guest clock: 1701983740.637211698
	I1207 21:15:40.686486   50624 fix.go:219] Guest: 2023-12-07 21:15:40.637211698 +0000 UTC Remote: 2023-12-07 21:15:40.566742665 +0000 UTC m=+244.381466877 (delta=70.469033ms)
	I1207 21:15:40.686503   50624 fix.go:190] guest clock delta is within tolerance: 70.469033ms
	I1207 21:15:40.686508   50624 start.go:83] releasing machines lock for "embed-certs-598346", held for 19.61610992s
	I1207 21:15:40.686533   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.686809   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:40.689665   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.690046   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.690069   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.690242   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690685   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690903   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690988   50624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:15:40.691035   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.691162   50624 ssh_runner.go:195] Run: cat /version.json
	I1207 21:15:40.691196   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.693712   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.693943   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694078   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.694106   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694269   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.694295   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.694333   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694419   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.694501   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.694580   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.694685   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.694742   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.694816   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.694925   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.801618   50624 ssh_runner.go:195] Run: systemctl --version
	I1207 21:15:40.807496   50624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:15:40.967288   50624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:15:40.974223   50624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:15:40.974315   50624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:15:40.988391   50624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:15:40.988418   50624 start.go:475] detecting cgroup driver to use...
	I1207 21:15:40.988510   50624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:15:41.002379   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:15:41.016074   50624 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:15:41.016125   50624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:15:41.031096   50624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:15:41.044808   50624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:15:41.150630   50624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:15:40.710656   51037 main.go:141] libmachine: (no-preload-950431) Calling .Start
	I1207 21:15:40.710832   51037 main.go:141] libmachine: (no-preload-950431) Ensuring networks are active...
	I1207 21:15:40.711509   51037 main.go:141] libmachine: (no-preload-950431) Ensuring network default is active
	I1207 21:15:40.711813   51037 main.go:141] libmachine: (no-preload-950431) Ensuring network mk-no-preload-950431 is active
	I1207 21:15:40.712201   51037 main.go:141] libmachine: (no-preload-950431) Getting domain xml...
	I1207 21:15:40.712860   51037 main.go:141] libmachine: (no-preload-950431) Creating domain...
	I1207 21:15:41.269009   50624 docker.go:219] disabling docker service ...
	I1207 21:15:41.269067   50624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:15:41.281800   50624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:15:41.293694   50624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:15:41.413774   50624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:15:41.523960   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:15:41.536474   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:15:41.553611   50624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:15:41.553668   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.562741   50624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:15:41.562831   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.571841   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.580887   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.590259   50624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:15:41.599349   50624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:15:41.607259   50624 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:15:41.607314   50624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:15:41.619425   50624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
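
The three commands above show the netfilter fallback: the `sysctl` probe exits with status 255 because `br_netfilter` is not loaded, so the module is loaded explicitly and IPv4 forwarding is enabled. A small sketch of that fallback, assuming a hypothetical `run` wrapper around the same commands (illustrative only, not minikube's implementation):

```go
// Sketch: if the bridge-netfilter sysctl is missing, load br_netfilter, then
// enable IPv4 forwarding. Command names match the log; the wrapper is assumed.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// Probe first; a failure here usually means br_netfilter is not loaded yet.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl probe failed, loading module:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println(err)
			return
		}
	}
	// Enable IP forwarding, as in the log.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println(err)
	}
}
```
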
	I1207 21:15:41.627826   50624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:15:41.736815   50624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:15:41.896418   50624 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:15:41.896505   50624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:15:41.901539   50624 start.go:543] Will wait 60s for crictl version
	I1207 21:15:41.901598   50624 ssh_runner.go:195] Run: which crictl
	I1207 21:15:41.905454   50624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:15:41.942196   50624 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:15:41.942267   50624 ssh_runner.go:195] Run: crio --version
	I1207 21:15:41.986024   50624 ssh_runner.go:195] Run: crio --version
	I1207 21:15:42.034806   50624 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:15:42.036352   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:42.039304   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:42.039704   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:42.039745   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:42.039930   50624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1207 21:15:42.043951   50624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:15:42.056473   50624 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:15:42.056535   50624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:15:42.099359   50624 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:15:42.099459   50624 ssh_runner.go:195] Run: which lz4
	I1207 21:15:42.103324   50624 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 21:15:42.107440   50624 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:15:42.107476   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:15:44.063941   50624 crio.go:444] Took 1.960653 seconds to copy over tarball
	I1207 21:15:44.064018   50624 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:15:41.955586   51037 main.go:141] libmachine: (no-preload-950431) Waiting to get IP...
	I1207 21:15:41.956530   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:41.956967   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:41.957004   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:41.956919   51634 retry.go:31] will retry after 266.143384ms: waiting for machine to come up
	I1207 21:15:42.224547   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.225112   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.225142   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.225060   51634 retry.go:31] will retry after 314.364486ms: waiting for machine to come up
	I1207 21:15:42.540722   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.541264   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.541294   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.541225   51634 retry.go:31] will retry after 447.845741ms: waiting for machine to come up
	I1207 21:15:42.990858   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.991283   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.991310   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.991246   51634 retry.go:31] will retry after 494.509595ms: waiting for machine to come up
	I1207 21:15:43.487745   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:43.488268   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:43.488305   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:43.488218   51634 retry.go:31] will retry after 517.471464ms: waiting for machine to come up
	I1207 21:15:44.007846   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:44.008291   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:44.008322   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:44.008247   51634 retry.go:31] will retry after 755.53339ms: waiting for machine to come up
	I1207 21:15:44.765367   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:44.765799   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:44.765827   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:44.765743   51634 retry.go:31] will retry after 947.674862ms: waiting for machine to come up
	I1207 21:15:45.715436   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:45.715859   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:45.715890   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:45.715811   51634 retry.go:31] will retry after 1.304063218s: waiting for machine to come up
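
The `retry.go:31` lines above poll for the VM's DHCP lease with a growing, jittered delay (266ms, 314ms, 447ms, ...). A minimal sketch of that retry-with-backoff pattern; the function name, attempt budget, and backoff factors are assumptions, not minikube's actual values.

```go
// Sketch: retry a probe with a growing, jittered delay until it succeeds or
// the attempt budget is spent, roughly like the "will retry after ..." lines.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(probe func() error, attempts int, base time.Duration) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := probe(); err == nil {
			return nil
		}
		// Add some jitter and grow the delay for the next round.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			return errors.New("no IP yet") // simulated "unable to find current IP address"
		}
		return nil
	}, 10, 250*time.Millisecond)
	fmt.Println("result:", err)
}
```
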
	I1207 21:15:47.049597   50624 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.985550761s)
	I1207 21:15:47.049622   50624 crio.go:451] Took 2.985655 seconds to extract the tarball
	I1207 21:15:47.049632   50624 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:15:47.089358   50624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:15:47.145982   50624 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:15:47.146007   50624 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:15:47.146069   50624 ssh_runner.go:195] Run: crio config
	I1207 21:15:47.205864   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:15:47.205888   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:15:47.205904   50624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:15:47.205933   50624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-598346 NodeName:embed-certs-598346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:15:47.206106   50624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-598346"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:15:47.206189   50624 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-598346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
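
The generated kubeadm config above pins the advertise address to 192.168.72.180, the pod subnet to 10.244.0.0/16, and the service subnet to 10.96.0.0/12. A small sketch, unrelated to minikube's own validation, that parses those CIDRs and checks whether the node IP falls inside either subnet (the configuration assumes it does not):

```go
// Sketch: parse the pod and service subnets from the config above and report
// whether the node IP lies inside them. Values are copied from the log.
package main

import (
	"fmt"
	"net"
)

func main() {
	nodeIP := net.ParseIP("192.168.72.180")
	for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			fmt.Println("bad CIDR:", cidr, err)
			continue
		}
		fmt.Printf("%s contains node IP %s: %v\n", cidr, nodeIP, ipnet.Contains(nodeIP))
	}
}
```
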
	I1207 21:15:47.206249   50624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:15:47.214998   50624 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:15:47.215065   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:15:47.223252   50624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1207 21:15:47.239698   50624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:15:47.258476   50624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1207 21:15:47.275957   50624 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1207 21:15:47.279689   50624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:15:47.295204   50624 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346 for IP: 192.168.72.180
	I1207 21:15:47.295234   50624 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:15:47.295391   50624 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:15:47.295436   50624 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:15:47.295501   50624 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/client.key
	I1207 21:15:47.295552   50624 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.key.379caec1
	I1207 21:15:47.295589   50624 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.key
	I1207 21:15:47.295686   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:15:47.295712   50624 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:15:47.295722   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:15:47.295748   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:15:47.295772   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:15:47.295795   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:15:47.295835   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:15:47.296438   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:15:47.324057   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:15:47.350921   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:15:47.378603   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:15:47.405443   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:15:47.429942   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:15:47.455437   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:15:47.478735   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:15:47.503326   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:15:47.525886   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:15:47.549414   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:15:47.572018   50624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:15:47.590990   50624 ssh_runner.go:195] Run: openssl version
	I1207 21:15:47.597874   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:15:47.610087   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.615875   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.615949   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.622941   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:15:47.632217   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:15:47.641323   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.645877   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.645955   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.651452   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:15:47.660848   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:15:47.670225   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.674620   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.674670   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.680118   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
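
The `openssl x509 -hash` / `ln -fs` pairs above install each certificate under `/etc/ssl/certs/<subject-hash>.0`, the layout OpenSSL-based trust stores expect. A sketch of that step, assuming a hypothetical `linkIntoTrustStore` helper that shells out to openssl for the hash (illustrative, not minikube's code):

```go
// Sketch: compute the cert's subject hash via openssl and symlink
// /etc/ssl/certs/<hash>.0 at the certificate file.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkIntoTrustStore(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Replace any stale link, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Example path taken from the log above.
	if err := linkIntoTrustStore("/usr/share/ca-certificates/16840.pem"); err != nil {
		fmt.Println(err)
	}
}
```
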
	I1207 21:15:47.689444   50624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:15:47.693775   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:15:47.699741   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:15:47.705442   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:15:47.710938   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:15:47.716367   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:15:47.721958   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
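
The `-checkend 86400` runs above verify that each cluster certificate remains valid for at least another 24 hours. An equivalent check using Go's `crypto/x509`, shown as a sketch with one example path from the log:

```go
// Sketch: the Go equivalent of "openssl x509 -noout -in <cert> -checkend 86400",
// i.e. confirm the certificate is still valid 24h from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
```
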
	I1207 21:15:47.727403   50624 kubeadm.go:404] StartCluster: {Name:embed-certs-598346 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:15:47.727520   50624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:15:47.727599   50624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:15:47.771682   50624 cri.go:89] found id: ""
	I1207 21:15:47.771763   50624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:15:47.782923   50624 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:15:47.782946   50624 kubeadm.go:636] restartCluster start
	I1207 21:15:47.783020   50624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:15:47.791494   50624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.792645   50624 kubeconfig.go:92] found "embed-certs-598346" server: "https://192.168.72.180:8443"
	I1207 21:15:47.794953   50624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:15:47.804014   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:47.804096   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:47.815412   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.815433   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:47.815503   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:47.825646   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:48.326356   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:48.326438   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:48.338771   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:48.826334   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:48.826405   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:48.837498   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:49.325998   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:49.326084   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:49.338197   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:49.825701   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:49.825821   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:49.842649   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:50.326181   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:50.326277   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:50.341560   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:50.826087   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:50.826183   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:50.841186   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
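The repeated "Checking apiserver status" entries above come from a poll loop: roughly every 500ms the tool re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` (api_server.go:166) until the process shows up or the surrounding deadline expires. A minimal sketch of that pattern in Go, with made-up names (waitForProcess is not minikube's actual API):

    // waitForProcess polls a pgrep-style check until it succeeds or ctx expires.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForProcess(ctx context.Context, pattern string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            // same check the log shows: pgrep for the apiserver command line
            if out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output(); err == nil {
                fmt.Printf("apiserver pid: %s", out)
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*"))
    }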
	I1207 21:15:47.021061   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:47.021495   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:47.021519   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:47.021459   51634 retry.go:31] will retry after 1.183999845s: waiting for machine to come up
	I1207 21:15:48.206768   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:48.207222   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:48.207250   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:48.207183   51634 retry.go:31] will retry after 1.595211966s: waiting for machine to come up
	I1207 21:15:49.804832   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:49.805298   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:49.805328   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:49.805229   51634 retry.go:31] will retry after 2.126345359s: waiting for machine to come up
	I1207 21:15:51.325994   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:51.326083   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:51.338573   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:51.826180   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:51.826253   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:51.837573   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:52.326115   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:52.326192   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:52.336984   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:52.826590   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:52.826681   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:52.837678   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:53.326205   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:53.326279   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:53.337579   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:53.826047   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:53.826145   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:53.840263   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:54.325765   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:54.325842   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:54.337452   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:54.825969   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:54.826063   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:54.837428   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:55.325968   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:55.326060   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:55.337128   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:55.826749   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:55.826832   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:55.838002   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:51.933915   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:51.934338   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:51.934372   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:51.934279   51634 retry.go:31] will retry after 2.448139802s: waiting for machine to come up
	I1207 21:15:54.384038   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:54.384399   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:54.384425   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:54.384351   51634 retry.go:31] will retry after 3.211975182s: waiting for machine to come up
	I1207 21:15:56.325893   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:56.326007   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:56.337698   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:56.825827   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:56.825964   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:56.836945   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:57.326560   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:57.326637   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:57.337299   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:57.804902   50624 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:15:57.804933   50624 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:15:57.804946   50624 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:15:57.805023   50624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:15:57.846788   50624 cri.go:89] found id: ""
	I1207 21:15:57.846877   50624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:15:57.861513   50624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:15:57.869730   50624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:15:57.869781   50624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:15:57.877777   50624 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:15:57.877801   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:57.992244   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:58.878385   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.051985   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.136414   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
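The five "kubeadm init phase ..." commands just above (certs, kubeconfig, kubelet-start, control-plane, etcd) are executed one after another as part of the cluster reconfigure, and the restart aborts on the first failure. A rough sketch of that sequencing, assuming each phase is simply a shell command (runPhases is a hypothetical helper, not minikube code):

    // runPhases runs the kubeadm init phases in order, stopping at the first error.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runPhases(kubeadmYAML string) error {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := fmt.Sprintf("sudo env PATH=\"/var/lib/minikube/binaries/v1.28.4:$PATH\" kubeadm init phase %s --config %s", p, kubeadmYAML)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %q failed: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := runPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
            fmt.Println(err)
        }
    }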
	I1207 21:15:59.232261   50624 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:15:59.232358   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:59.246262   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:59.760617   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:00.260132   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:00.760723   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:57.599056   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:57.599417   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:57.599444   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:57.599382   51634 retry.go:31] will retry after 5.532381184s: waiting for machine to come up
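The retry.go:31 lines above wait for the VM to obtain an IP address, sleeping for a progressively longer, jittered interval between attempts (roughly 1.2s up to 5.5s in this run). A minimal sketch of that kind of backoff loop (retryWithBackoff and the growth factor are illustrative assumptions, not the library's exact policy):

    // retryWithBackoff retries fn with a growing, jittered delay until it succeeds
    // or the attempt budget is exhausted.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, fn func() error) error {
        delay := time.Second
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow roughly 1.5x per attempt
        }
        return fmt.Errorf("still failing after %d attempts", attempts)
    }

    func main() {
        err := retryWithBackoff(10, func() error {
            // stand-in for "find current IP address of the domain"
            return fmt.Errorf("machine has no IP yet")
        })
        fmt.Println(err)
    }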
	I1207 21:16:04.442905   51113 start.go:369] acquired machines lock for "default-k8s-diff-port-275828" in 3m9.513966804s
	I1207 21:16:04.442972   51113 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:16:04.442985   51113 fix.go:54] fixHost starting: 
	I1207 21:16:04.443390   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:04.443434   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:04.460087   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45507
	I1207 21:16:04.460495   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:04.460991   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:04.461014   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:04.461405   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:04.461582   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:04.461705   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:04.463304   51113 fix.go:102] recreateIfNeeded on default-k8s-diff-port-275828: state=Stopped err=<nil>
	I1207 21:16:04.463337   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	W1207 21:16:04.463494   51113 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:16:04.465895   51113 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-275828" ...
	I1207 21:16:04.467328   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Start
	I1207 21:16:04.467485   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring networks are active...
	I1207 21:16:04.468206   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring network default is active
	I1207 21:16:04.468581   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring network mk-default-k8s-diff-port-275828 is active
	I1207 21:16:04.468943   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Getting domain xml...
	I1207 21:16:04.469483   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Creating domain...
	I1207 21:16:03.134233   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.134762   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has current primary IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.134794   51037 main.go:141] libmachine: (no-preload-950431) Found IP for machine: 192.168.50.100
	I1207 21:16:03.134811   51037 main.go:141] libmachine: (no-preload-950431) Reserving static IP address...
	I1207 21:16:03.135186   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.135209   51037 main.go:141] libmachine: (no-preload-950431) Reserved static IP address: 192.168.50.100
	I1207 21:16:03.135230   51037 main.go:141] libmachine: (no-preload-950431) DBG | skip adding static IP to network mk-no-preload-950431 - found existing host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"}
	I1207 21:16:03.135251   51037 main.go:141] libmachine: (no-preload-950431) DBG | Getting to WaitForSSH function...
	I1207 21:16:03.135265   51037 main.go:141] libmachine: (no-preload-950431) Waiting for SSH to be available...
	I1207 21:16:03.137331   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.137662   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.137689   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.137792   51037 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH client type: external
	I1207 21:16:03.137817   51037 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa (-rw-------)
	I1207 21:16:03.137854   51037 main.go:141] libmachine: (no-preload-950431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:03.137871   51037 main.go:141] libmachine: (no-preload-950431) DBG | About to run SSH command:
	I1207 21:16:03.137890   51037 main.go:141] libmachine: (no-preload-950431) DBG | exit 0
	I1207 21:16:03.229593   51037 main.go:141] libmachine: (no-preload-950431) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:03.230019   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:16:03.230604   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:03.233069   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.233426   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.233462   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.233661   51037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:16:03.233837   51037 machine.go:88] provisioning docker machine ...
	I1207 21:16:03.233855   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:03.234081   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.234254   51037 buildroot.go:166] provisioning hostname "no-preload-950431"
	I1207 21:16:03.234277   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.234386   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.236593   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.236859   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.236892   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.237079   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.237243   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.237396   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.237522   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.237653   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.238000   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.238016   51037 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-950431 && echo "no-preload-950431" | sudo tee /etc/hostname
	I1207 21:16:03.374959   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-950431
	
	I1207 21:16:03.374999   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.377825   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.378212   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.378247   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.378389   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.378604   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.378763   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.378896   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.379041   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.379363   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.379399   51037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950431/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:03.510050   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:03.510081   51037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:03.510109   51037 buildroot.go:174] setting up certificates
	I1207 21:16:03.510119   51037 provision.go:83] configureAuth start
	I1207 21:16:03.510130   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.510367   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:03.512754   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.513120   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.513151   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.513289   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.515546   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.515894   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.515947   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.516066   51037 provision.go:138] copyHostCerts
	I1207 21:16:03.516119   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:03.516138   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:03.516206   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:03.516294   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:03.516303   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:03.516328   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:03.516398   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:03.516406   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:03.516430   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:03.516480   51037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.no-preload-950431 san=[192.168.50.100 192.168.50.100 localhost 127.0.0.1 minikube no-preload-950431]
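provision.go:112 above issues a per-machine server certificate whose subject alternative names cover the VM IP, localhost, "minikube" and the machine name. A compressed sketch of issuing such a certificate with Go's crypto/x509 (makeServerCert is an illustrative helper; minikube's own implementation differs in detail):

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // makeServerCert signs a server certificate with SANs matching the san=[...]
    // list in the log, using an already-loaded CA certificate and key.
    func makeServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-950431"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // illustrative validity window
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "no-preload-950431"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.50.100"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }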
	I1207 21:16:03.662663   51037 provision.go:172] copyRemoteCerts
	I1207 21:16:03.662732   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:03.662756   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.665043   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.665344   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.665379   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.665523   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.665713   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.665887   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.666049   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:03.757956   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:03.782348   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1207 21:16:03.806388   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:16:03.831058   51037 provision.go:86] duration metric: configureAuth took 320.927373ms
	I1207 21:16:03.831086   51037 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:03.831264   51037 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:16:03.831365   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.834104   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.834489   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.834535   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.834703   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.834901   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.835087   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.835224   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.835370   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.835699   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.835721   51037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:04.154758   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:04.154783   51037 machine.go:91] provisioned docker machine in 920.933844ms
	I1207 21:16:04.154795   51037 start.go:300] post-start starting for "no-preload-950431" (driver="kvm2")
	I1207 21:16:04.154810   51037 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:04.154829   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.155148   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:04.155173   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.157776   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.158131   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.158163   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.158336   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.158560   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.158733   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.158873   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.258325   51037 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:04.262930   51037 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:04.262950   51037 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:04.263011   51037 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:04.263077   51037 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:04.263177   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:04.271602   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:04.303816   51037 start.go:303] post-start completed in 148.990598ms
	I1207 21:16:04.303849   51037 fix.go:56] fixHost completed within 23.617201529s
	I1207 21:16:04.303873   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.306576   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.306930   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.306962   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.307104   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.307326   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.307458   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.307591   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.307773   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:04.308242   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:04.308260   51037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:04.442724   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983764.388433819
	
	I1207 21:16:04.442748   51037 fix.go:206] guest clock: 1701983764.388433819
	I1207 21:16:04.442757   51037 fix.go:219] Guest: 2023-12-07 21:16:04.388433819 +0000 UTC Remote: 2023-12-07 21:16:04.303852803 +0000 UTC m=+192.597462932 (delta=84.581016ms)
	I1207 21:16:04.442797   51037 fix.go:190] guest clock delta is within tolerance: 84.581016ms
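fix.go compares the guest clock (read over SSH with `date`) against the host clock and only resyncs when the difference exceeds a tolerance; here the delta was 84.581016ms, which passed. A small sketch of that comparison (clockWithinTolerance is a made-up name):

    package main

    import (
        "fmt"
        "time"
    )

    // clockWithinTolerance reports whether the guest and host clocks agree
    // to within the given tolerance.
    func clockWithinTolerance(guest, host time.Time, tol time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tol
    }

    func main() {
        guest := time.Unix(1701983764, 388433819) // epoch value echoed by the guest above
        host := guest.Add(-84581016 * time.Nanosecond)
        fmt.Println(clockWithinTolerance(guest, host, time.Second)) // true
    }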
	I1207 21:16:04.442801   51037 start.go:83] releasing machines lock for "no-preload-950431", held for 23.756181397s
	I1207 21:16:04.442827   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.443065   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:04.446137   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.446578   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.446612   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.446797   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447413   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447656   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447732   51037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:04.447783   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.447902   51037 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:04.447923   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.450882   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451025   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451253   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.451280   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451470   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.451481   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.451507   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451654   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.451720   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.451923   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.452043   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.452098   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.452561   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.452761   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.565982   51037 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:04.573821   51037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:04.741571   51037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:04.749951   51037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:04.750038   51037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:04.770148   51037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:04.770176   51037 start.go:475] detecting cgroup driver to use...
	I1207 21:16:04.770244   51037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:04.787798   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:04.802346   51037 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:04.802415   51037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:04.819638   51037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:04.836910   51037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:04.947330   51037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:05.087698   51037 docker.go:219] disabling docker service ...
	I1207 21:16:05.087794   51037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:05.104790   51037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:05.122187   51037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:05.252225   51037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:05.394598   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:05.408596   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:05.429804   51037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:16:05.429876   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.441617   51037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:05.441700   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.452787   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.462684   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.472827   51037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:05.485493   51037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:05.495282   51037 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:05.495367   51037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:05.512972   51037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:05.523817   51037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:05.674940   51037 ssh_runner.go:195] Run: sudo systemctl restart crio
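The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: it pins pause_image to registry.k8s.io/pause:3.9, sets cgroup_manager to "cgroupfs", re-adds conmon_cgroup = "pod", loads br_netfilter, enables IP forwarding and then restarts CRI-O. A tiny sketch that builds the same style of sed commands (sedSet is a hypothetical helper, not minikube's):

    package main

    import "fmt"

    // sedSet produces an in-place sed edit that replaces a whole "key = ..." line,
    // mirroring the commands shown in the log above.
    func sedSet(file, key, value string) string {
        return fmt.Sprintf("sudo sed -i 's|^.*%s = .*$|%s = \"%s\"|' %s", key, key, value, file)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        for k, v := range map[string]string{
            "pause_image":    "registry.k8s.io/pause:3.9",
            "cgroup_manager": "cgroupfs",
        } {
            fmt.Println(sedSet(conf, k, v))
        }
    }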
	I1207 21:16:05.866827   51037 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:05.866913   51037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:05.873044   51037 start.go:543] Will wait 60s for crictl version
	I1207 21:16:05.873109   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:05.878484   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:05.919888   51037 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:05.919979   51037 ssh_runner.go:195] Run: crio --version
	I1207 21:16:05.976795   51037 ssh_runner.go:195] Run: crio --version
	I1207 21:16:06.034745   51037 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1207 21:16:01.260865   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:01.760580   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:01.790951   50624 api_server.go:72] duration metric: took 2.55868777s to wait for apiserver process to appear ...
	I1207 21:16:01.790981   50624 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:01.791000   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.338427   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:05.338467   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:05.338483   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.436356   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:05.436385   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:05.937143   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.943626   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:05.943656   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
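Once the control plane is restarted, readiness is judged by polling https://192.168.72.180:8443/healthz: the initial 403 just means the anonymous user is not yet authorized for /healthz, and the 500s list which poststarthooks (the [-] entries such as rbac/bootstrap-roles) are still pending; the wait ends once the endpoint starts returning 200. A bare-bones version of that probe (checkHealthz is illustrative and, for brevity, skips certificate verification):

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz polls the apiserver /healthz endpoint until it returns 200 or ctx expires.
    func checkHealthz(ctx context.Context, url string) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        fmt.Println(checkHealthz(ctx, "https://192.168.72.180:8443/healthz"))
    }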
	I1207 21:16:06.036269   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:06.039546   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:06.039919   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:06.039968   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:06.040205   51037 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:06.044899   51037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:06.061053   51037 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:16:06.061106   51037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:06.099113   51037 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1207 21:16:06.099136   51037 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:16:06.099196   51037 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:06.099225   51037 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.099246   51037 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1207 21:16:06.099283   51037 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.099314   51037 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.099229   51037 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.099419   51037 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.099484   51037 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.100960   51037 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.100961   51037 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.101035   51037 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1207 21:16:06.100967   51037 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.100967   51037 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.100970   51037 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.100970   51037 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.100973   51037 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:06.234869   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.272014   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.275605   51037 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1207 21:16:06.275659   51037 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.275716   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.295068   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.329385   51037 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1207 21:16:06.329435   51037 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.329449   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.329486   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.356701   51037 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1207 21:16:06.356744   51037 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.356790   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.382536   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1207 21:16:06.389671   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.391917   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.399801   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.399908   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.399980   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.400067   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.409081   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.616824   51037 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1207 21:16:06.616864   51037 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1207 21:16:06.616876   51037 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.616884   51037 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.616923   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.616930   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.617038   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1207 21:16:06.617075   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1207 21:16:06.617086   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.617114   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:06.617122   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.617199   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1207 21:16:06.617272   51037 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1207 21:16:06.617286   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:06.617305   51037 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.617353   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.631975   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.632094   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1207 21:16:06.632181   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.436900   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:06.457077   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:06.457122   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:06.936534   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:06.943658   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1207 21:16:06.952206   50624 api_server.go:141] control plane version: v1.28.4
	I1207 21:16:06.952239   50624 api_server.go:131] duration metric: took 5.161250619s to wait for apiserver health ...
	I1207 21:16:06.952251   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:16:06.952259   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:06.954179   50624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
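The health wait that finishes just above polls the apiserver's /healthz endpoint until the 500 responses (typically held up by the rbac/bootstrap-roles post-start hook) turn into a plain 200 "ok". A minimal hand-run sketch of the same probe, assuming the endpoint from this run (192.168.72.180:8443) and skipping TLS verification because the cluster CA is not in the host trust store:

  # Poll /healthz until the apiserver reports healthy; -k skips TLS verification, -f fails on HTTP 500.
  until curl -ksf "https://192.168.72.180:8443/healthz" >/dev/null; do
    # On failure, ?verbose prints the per-hook [+]/[-] breakdown seen in the log above.
    curl -ks "https://192.168.72.180:8443/healthz?verbose"
    sleep 1
  done
  echo "apiserver healthy"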
	I1207 21:16:05.844251   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting to get IP...
	I1207 21:16:05.845419   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:05.845793   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:05.845896   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:05.845790   51802 retry.go:31] will retry after 224.053393ms: waiting for machine to come up
	I1207 21:16:06.071071   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.071521   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.071545   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.071464   51802 retry.go:31] will retry after 272.776477ms: waiting for machine to come up
	I1207 21:16:06.346126   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.346739   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.346773   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.346683   51802 retry.go:31] will retry after 373.022784ms: waiting for machine to come up
	I1207 21:16:06.721567   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.722089   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.722115   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.722029   51802 retry.go:31] will retry after 380.100559ms: waiting for machine to come up
	I1207 21:16:07.103408   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.103853   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.103884   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:07.103798   51802 retry.go:31] will retry after 473.24776ms: waiting for machine to come up
	I1207 21:16:07.578548   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.579087   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.579232   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:07.579176   51802 retry.go:31] will retry after 892.826082ms: waiting for machine to come up
	I1207 21:16:08.473531   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:08.474027   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:08.474058   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:08.473989   51802 retry.go:31] will retry after 1.042648737s: waiting for machine to come up
	I1207 21:16:09.518823   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:09.519321   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:09.519363   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:09.519213   51802 retry.go:31] will retry after 948.481622ms: waiting for machine to come up
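The libmachine retry loop above is waiting for the newly started VM to obtain a DHCP lease on its libvirt network so minikube can learn the machine's IP. A rough manual equivalent, assuming libvirt access on the host and the network name and MAC address from this run:

  # List current DHCP leases on the machine's private libvirt network.
  sudo virsh net-dhcp-leases mk-default-k8s-diff-port-275828
  # Or poll until the lease for this MAC appears, roughly what the retry loop does.
  until sudo virsh net-dhcp-leases mk-default-k8s-diff-port-275828 | grep -q '52:54:00:f3:1f:c5'; do
    sleep 2
  done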
	I1207 21:16:06.955727   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:06.967724   50624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
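The 457-byte file written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration the log refers to. The log does not print its contents, so the values below are purely illustrative; a sketch of what a bridge + portmap conflist of this shape typically looks like:

  cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF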
	I1207 21:16:06.990163   50624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:07.001387   50624 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:07.001425   50624 system_pods.go:61] "coredns-5dd5756b68-hlpsb" [c1f9f7db-0741-483c-9e39-d6f0ce4715d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:07.001436   50624 system_pods.go:61] "etcd-embed-certs-598346" [acda3700-87a2-4442-94e6-1d17288e7cee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:07.001446   50624 system_pods.go:61] "kube-apiserver-embed-certs-598346" [e1439056-061b-4add-a399-c55a816fba70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:07.001456   50624 system_pods.go:61] "kube-controller-manager-embed-certs-598346" [b4c80c36-da2c-4c46-b655-3c6bb2a96ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:07.001466   50624 system_pods.go:61] "kube-proxy-jqhnn" [e2635205-e67a-4b56-a7b4-82fe97b5fe7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:07.001490   50624 system_pods.go:61] "kube-scheduler-embed-certs-598346" [3b90e1d4-9c0f-46e4-a7b7-5e42717a8b70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:07.001499   50624 system_pods.go:61] "metrics-server-57f55c9bc5-sndh4" [9a052ce0-760f-4cfd-a958-971daa14ea02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:07.001511   50624 system_pods.go:61] "storage-provisioner" [bf244954-a1d7-4b51-9085-387e60d02792] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:07.001524   50624 system_pods.go:74] duration metric: took 11.336763ms to wait for pod list to return data ...
	I1207 21:16:07.001538   50624 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:07.007697   50624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:07.007737   50624 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:07.007752   50624 node_conditions.go:105] duration metric: took 6.207447ms to run NodePressure ...
	I1207 21:16:07.007770   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:07.287760   50624 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:07.297260   50624 kubeadm.go:787] kubelet initialised
	I1207 21:16:07.297285   50624 kubeadm.go:788] duration metric: took 9.495153ms waiting for restarted kubelet to initialise ...
	I1207 21:16:07.297296   50624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:07.304800   50624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.313488   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.313523   50624 pod_ready.go:81] duration metric: took 8.689063ms waiting for pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.313535   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.313545   50624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.321603   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "etcd-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.321637   50624 pod_ready.go:81] duration metric: took 8.078752ms waiting for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.321649   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "etcd-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.321658   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.333040   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.333068   50624 pod_ready.go:81] duration metric: took 11.399287ms waiting for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.333081   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.333089   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.397606   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.397632   50624 pod_ready.go:81] duration metric: took 64.53373ms waiting for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.397642   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.397648   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqhnn" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:08.713161   50624 pod_ready.go:92] pod "kube-proxy-jqhnn" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:08.713188   50624 pod_ready.go:81] duration metric: took 1.315530906s waiting for pod "kube-proxy-jqhnn" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:08.713201   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:10.919896   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
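The pod_ready loop above waits for each system-critical pod to report Ready, skipping pods whose node is itself still NotReady. A hand-run approximation of the same wait, assuming a kubeconfig pointed at the embed-certs-598346 cluster:

  # Watch the control-plane pods come back after the restart.
  kubectl get pods -n kube-system -o wide
  # Block until kube-scheduler reports Ready (the pod the log waits ~9.7s for above).
  kubectl wait --for=condition=Ready pod/kube-scheduler-embed-certs-598346 -n kube-system --timeout=4m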
	I1207 21:16:07.059825   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:10.061030   51037 ssh_runner.go:235] Completed: which crictl: (3.443650725s)
	I1207 21:16:10.061121   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:10.061130   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (3.443992158s)
	I1207 21:16:10.061160   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1207 21:16:10.061174   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.444033736s)
	I1207 21:16:10.061199   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1207 21:16:10.061225   51037 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:10.061245   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1: (3.429236441s)
	I1207 21:16:10.061286   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:10.061294   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:10.061296   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.429094571s)
	I1207 21:16:10.061330   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1207 21:16:10.061346   51037 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.001491955s)
	I1207 21:16:10.061361   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:10.061387   51037 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1207 21:16:10.061402   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:10.061430   51037 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:10.061469   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:10.469685   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:10.470224   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:10.470251   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:10.470187   51802 retry.go:31] will retry after 1.846436384s: waiting for machine to come up
	I1207 21:16:12.319116   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:12.319558   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:12.319590   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:12.319512   51802 retry.go:31] will retry after 1.415005437s: waiting for machine to come up
	I1207 21:16:13.736082   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:13.736599   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:13.736630   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:13.736533   51802 retry.go:31] will retry after 2.499952402s: waiting for machine to come up
	I1207 21:16:13.413966   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:15.414181   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:14.287122   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.225788884s)
	I1207 21:16:14.287166   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1207 21:16:14.287165   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: (4.226018563s)
	I1207 21:16:14.287190   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:14.287204   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:14.287130   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (4.225706156s)
	I1207 21:16:14.287208   51037 ssh_runner.go:235] Completed: which crictl: (4.225716226s)
	I1207 21:16:14.287294   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:14.287310   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (4.225934747s)
	I1207 21:16:14.287322   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1207 21:16:14.287325   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:14.287270   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1207 21:16:14.287238   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:14.338957   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 21:16:14.339087   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:16.589704   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.302291312s)
	I1207 21:16:16.589740   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1207 21:16:16.589764   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:16.589777   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.302463063s)
	I1207 21:16:16.589816   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:16.589817   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1207 21:16:16.589887   51037 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.250737859s)
	I1207 21:16:16.589912   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1207 21:16:16.238979   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:16.239340   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:16.239367   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:16.239304   51802 retry.go:31] will retry after 2.478988074s: waiting for machine to come up
	I1207 21:16:18.720359   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:18.720892   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:18.720925   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:18.720840   51802 retry.go:31] will retry after 4.119588433s: waiting for machine to come up
	I1207 21:16:17.913477   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:18.407386   50624 pod_ready.go:92] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:18.407417   50624 pod_ready.go:81] duration metric: took 9.694207323s waiting for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:18.407431   50624 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:20.429952   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:18.142546   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (1.552699587s)
	I1207 21:16:18.142620   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1207 21:16:18.142658   51037 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:18.142737   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:20.432330   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.289556402s)
	I1207 21:16:20.432358   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1207 21:16:20.432386   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:20.432436   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:22.843120   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:22.843516   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:22.843540   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:22.843470   51802 retry.go:31] will retry after 3.969701228s: waiting for machine to come up
	I1207 21:16:22.431295   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:24.929166   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:22.891954   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.459495307s)
	I1207 21:16:22.891978   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1207 21:16:22.892001   51037 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:22.892056   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:23.742939   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1207 21:16:23.743011   51037 cache_images.go:123] Successfully loaded all cached images
	I1207 21:16:23.743021   51037 cache_images.go:92] LoadImages completed in 17.643875393s
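The 17.6s LoadImages sequence that completes here repeats the same cycle for every image: inspect the runtime for the expected image ID, remove a mismatched copy with crictl, skip the tarball copy when /var/lib/minikube/images already has it, then load it with podman (which backs CRI-O's image store). A condensed sketch of that per-image cycle, using kube-apiserver as the example:

  IMG=registry.k8s.io/kube-apiserver:v1.29.0-rc.1
  TAR=/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
  # Does the runtime already hold the image at the expected ID?
  sudo podman image inspect --format '{{.Id}}' "$IMG" || true
  # Drop any stale copy so the cached tarball wins.
  sudo crictl rmi "$IMG" || true
  # Load the cached tarball into the CRI-O/podman image store.
  sudo podman load -i "$TAR"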
	I1207 21:16:23.743107   51037 ssh_runner.go:195] Run: crio config
	I1207 21:16:23.802064   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:16:23.802087   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:23.802106   51037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:23.802128   51037 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.100 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-950431 NodeName:no-preload-950431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:16:23.802258   51037 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-950431"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:23.802329   51037 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-950431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-950431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
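The kubelet unit override dumped above is copied onto the node by the scp lines that follow. Once the unit and its 10-kubeadm.conf drop-in are in place, a quick hand check (not something this code path runs) that systemd will pick up the ExecStart flags shown is:

  # Show the effective unit plus drop-ins, then reload systemd's view of them.
  sudo systemctl cat kubelet
  sudo systemctl daemon-reload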
	I1207 21:16:23.802382   51037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1207 21:16:23.813052   51037 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:23.813143   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:23.823249   51037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1207 21:16:23.840999   51037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1207 21:16:23.857599   51037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1207 21:16:23.873664   51037 ssh_runner.go:195] Run: grep 192.168.50.100	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:23.877208   51037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:23.888109   51037 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431 for IP: 192.168.50.100
	I1207 21:16:23.888148   51037 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:23.888298   51037 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:23.888333   51037 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:23.888394   51037 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.key
	I1207 21:16:23.888453   51037 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.key.8f36cd02
	I1207 21:16:23.888490   51037 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.key
	I1207 21:16:23.888598   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:23.888626   51037 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:23.888638   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:23.888669   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:23.888701   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:23.888725   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:23.888769   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:23.889405   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:23.911313   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 21:16:23.935796   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:23.960576   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:16:23.983952   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:24.005755   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:24.027232   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:24.049398   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:24.073975   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:24.097326   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:24.118396   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:24.140590   51037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:24.157287   51037 ssh_runner.go:195] Run: openssl version
	I1207 21:16:24.163079   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:24.173618   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.177973   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.178038   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.183537   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:24.193750   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:24.203836   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.208278   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.208324   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.213906   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:24.223939   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:24.234037   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.238379   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.238443   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.243650   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:24.253904   51037 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:24.258343   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:24.264011   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:24.269609   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:24.275294   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:24.280969   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:24.286763   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
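The openssl calls above perform two small checks: subject-hash symlinks so the minikube CA and user certs resolve under /etc/ssl/certs, and a -checkend pass that fails if any control-plane cert expires within the next 86400 seconds (24 hours). The same checks by hand, using files from this run as the example:

  # Compute the subject hash openssl uses to locate CA certs in /etc/ssl/certs.
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
  # Exit non-zero if the cert expires within the next 86400 seconds.
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
    && echo "valid for at least 24h"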
	I1207 21:16:24.292414   51037 kubeadm.go:404] StartCluster: {Name:no-preload-950431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-950431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:24.292505   51037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:24.292565   51037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:24.342426   51037 cri.go:89] found id: ""
	I1207 21:16:24.342596   51037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:24.353900   51037 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:24.353939   51037 kubeadm.go:636] restartCluster start
	I1207 21:16:24.353999   51037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:24.363465   51037 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.364722   51037 kubeconfig.go:92] found "no-preload-950431" server: "https://192.168.50.100:8443"
	I1207 21:16:24.367198   51037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:24.378918   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.378971   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.391331   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.391354   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.391393   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.403003   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.903722   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.903814   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.915891   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:25.403459   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:25.403568   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:25.415677   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:25.903683   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:25.903765   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:25.915474   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:26.403146   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:26.403258   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:26.414072   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.031043   50270 start.go:369] acquired machines lock for "old-k8s-version-483745" in 1m1.958159244s
	I1207 21:16:28.031117   50270 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:16:28.031127   50270 fix.go:54] fixHost starting: 
	I1207 21:16:28.031477   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:28.031504   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:28.047757   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I1207 21:16:28.048134   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:28.048598   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:16:28.048628   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:28.048962   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:28.049123   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:28.049278   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:16:28.050698   50270 fix.go:102] recreateIfNeeded on old-k8s-version-483745: state=Stopped err=<nil>
	I1207 21:16:28.050716   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	W1207 21:16:28.050943   50270 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:16:28.053462   50270 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-483745" ...
	I1207 21:16:28.054995   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Start
	I1207 21:16:28.055169   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring networks are active...
	I1207 21:16:28.055803   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring network default is active
	I1207 21:16:28.056167   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring network mk-old-k8s-version-483745 is active
	I1207 21:16:28.056613   50270 main.go:141] libmachine: (old-k8s-version-483745) Getting domain xml...
	I1207 21:16:28.057267   50270 main.go:141] libmachine: (old-k8s-version-483745) Creating domain...
	I1207 21:16:26.815724   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.816306   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Found IP for machine: 192.168.39.254
	I1207 21:16:26.816346   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Reserving static IP address...
	I1207 21:16:26.816373   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has current primary IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.816843   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-275828", mac: "52:54:00:f3:1f:c5", ip: "192.168.39.254"} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.816874   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Reserved static IP address: 192.168.39.254
	I1207 21:16:26.816895   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | skip adding static IP to network mk-default-k8s-diff-port-275828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-275828", mac: "52:54:00:f3:1f:c5", ip: "192.168.39.254"}
	I1207 21:16:26.816916   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Getting to WaitForSSH function...
	I1207 21:16:26.816933   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for SSH to be available...
	I1207 21:16:26.819265   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.819625   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.819654   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.819808   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Using SSH client type: external
	I1207 21:16:26.819840   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa (-rw-------)
	I1207 21:16:26.819880   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:26.819908   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | About to run SSH command:
	I1207 21:16:26.819930   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | exit 0
	I1207 21:16:26.913932   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:26.914232   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetConfigRaw
	I1207 21:16:26.915043   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:26.917486   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.917899   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.917944   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.918182   51113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:16:26.918360   51113 machine.go:88] provisioning docker machine ...
	I1207 21:16:26.918380   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:26.918587   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:26.918775   51113 buildroot.go:166] provisioning hostname "default-k8s-diff-port-275828"
	I1207 21:16:26.918805   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:26.918971   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:26.921227   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.921482   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.921515   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.921657   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:26.921818   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:26.922006   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:26.922162   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:26.922317   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:26.922695   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:26.922713   51113 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-275828 && echo "default-k8s-diff-port-275828" | sudo tee /etc/hostname
	I1207 21:16:27.066745   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-275828
	
	I1207 21:16:27.066778   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.069493   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.069842   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.069895   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.070078   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.070295   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.070446   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.070596   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.070824   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.071271   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.071302   51113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-275828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-275828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-275828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:27.206475   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:27.206503   51113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:27.206534   51113 buildroot.go:174] setting up certificates
	I1207 21:16:27.206545   51113 provision.go:83] configureAuth start
	I1207 21:16:27.206553   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:27.206818   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:27.209295   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.209632   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.209666   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.209763   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.211882   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.212147   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.212176   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.212250   51113 provision.go:138] copyHostCerts
	I1207 21:16:27.212306   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:27.212326   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:27.212396   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:27.212501   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:27.212511   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:27.212540   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:27.212617   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:27.212627   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:27.212656   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:27.212728   51113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-275828 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube default-k8s-diff-port-275828]
	I1207 21:16:27.273212   51113 provision.go:172] copyRemoteCerts
	I1207 21:16:27.273291   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:27.273321   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.275905   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.276185   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.276219   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.276380   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.276569   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.276703   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.276814   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:27.371834   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:27.394096   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1207 21:16:27.416619   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:16:27.443103   51113 provision.go:86] duration metric: configureAuth took 236.548224ms
	I1207 21:16:27.443127   51113 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:27.443336   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:16:27.443406   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.446005   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.446303   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.446334   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.446477   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.446648   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.446789   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.446959   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.447158   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.447600   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.447623   51113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:27.760539   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:27.760582   51113 machine.go:91] provisioned docker machine in 842.207987ms
	I1207 21:16:27.760608   51113 start.go:300] post-start starting for "default-k8s-diff-port-275828" (driver="kvm2")
	I1207 21:16:27.760617   51113 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:27.760633   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:27.760993   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:27.761030   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.763527   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.763923   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.763968   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.764077   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.764254   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.764386   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.764559   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:27.860772   51113 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:27.865258   51113 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:27.865285   51113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:27.865348   51113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:27.865422   51113 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:27.865537   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:27.874901   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:27.896890   51113 start.go:303] post-start completed in 136.257327ms
	I1207 21:16:27.896912   51113 fix.go:56] fixHost completed within 23.453929111s
	I1207 21:16:27.896932   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.899422   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.899740   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.899780   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.899916   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.900104   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.900265   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.900400   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.900601   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.900920   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.900935   51113 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:28.030917   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983787.976128099
	
	I1207 21:16:28.030936   51113 fix.go:206] guest clock: 1701983787.976128099
	I1207 21:16:28.030943   51113 fix.go:219] Guest: 2023-12-07 21:16:27.976128099 +0000 UTC Remote: 2023-12-07 21:16:27.896915587 +0000 UTC m=+213.119643923 (delta=79.212512ms)
	I1207 21:16:28.030970   51113 fix.go:190] guest clock delta is within tolerance: 79.212512ms
	I1207 21:16:28.030975   51113 start.go:83] releasing machines lock for "default-k8s-diff-port-275828", held for 23.588040931s
	I1207 21:16:28.031003   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.031255   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:28.033864   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.034277   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.034318   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.034501   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035101   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035283   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035354   51113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:28.035399   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:28.035519   51113 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:28.035543   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:28.038353   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038570   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038636   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.038675   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038789   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:28.038993   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:28.039013   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.039035   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.039152   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:28.039189   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:28.039319   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:28.039368   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:28.039495   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:28.039619   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:28.161850   51113 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:28.167540   51113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:28.311477   51113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:28.319102   51113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:28.319177   51113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:28.334118   51113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:28.334138   51113 start.go:475] detecting cgroup driver to use...
	I1207 21:16:28.334187   51113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:28.351563   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:28.364950   51113 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:28.365015   51113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:28.380367   51113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:28.396070   51113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:28.504230   51113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:28.634829   51113 docker.go:219] disabling docker service ...
	I1207 21:16:28.634893   51113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:28.648955   51113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:28.660615   51113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:28.781577   51113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:28.899307   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:28.912673   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:28.931310   51113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:16:28.931384   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.941006   51113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:28.941083   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.951712   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.963062   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.973981   51113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:28.984828   51113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:28.993884   51113 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:28.993992   51113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:29.007812   51113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:29.017781   51113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:29.147958   51113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:29.329720   51113 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:29.329781   51113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:29.336048   51113 start.go:543] Will wait 60s for crictl version
	I1207 21:16:29.336109   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:16:29.340075   51113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:29.378207   51113 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:29.378289   51113 ssh_runner.go:195] Run: crio --version
	I1207 21:16:29.438034   51113 ssh_runner.go:195] Run: crio --version
	I1207 21:16:29.487899   51113 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:16:29.489336   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:29.492387   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:29.492824   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:29.492858   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:29.493105   51113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:29.497882   51113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:29.510857   51113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:16:29.510910   51113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:29.557513   51113 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:16:29.557590   51113 ssh_runner.go:195] Run: which lz4
	I1207 21:16:29.561849   51113 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 21:16:29.566351   51113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:16:29.566383   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:16:26.930512   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:29.442726   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:26.903645   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:26.903716   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:26.915728   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:27.403874   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:27.403939   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:27.415501   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:27.904082   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:27.904150   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:27.916404   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.404050   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:28.404143   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:28.416757   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.903144   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:28.903202   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:28.914709   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.403236   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:29.403324   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:29.415595   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.903823   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:29.903908   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:29.920093   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:30.403786   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:30.403864   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:30.417374   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:30.903246   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:30.903335   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:30.916333   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:31.403909   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:31.403984   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:31.418792   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.352362   50270 main.go:141] libmachine: (old-k8s-version-483745) Waiting to get IP...
	I1207 21:16:29.353395   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.353871   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.353965   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.353847   51971 retry.go:31] will retry after 307.502031ms: waiting for machine to come up
	I1207 21:16:29.663412   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.663958   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.663990   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.663889   51971 retry.go:31] will retry after 328.013518ms: waiting for machine to come up
	I1207 21:16:29.993550   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.994129   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.994160   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.994066   51971 retry.go:31] will retry after 315.323859ms: waiting for machine to come up
	I1207 21:16:30.310570   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:30.311106   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:30.311139   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:30.311055   51971 retry.go:31] will retry after 547.317149ms: waiting for machine to come up
	I1207 21:16:30.859753   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:30.860500   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:30.860532   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:30.860479   51971 retry.go:31] will retry after 591.81737ms: waiting for machine to come up
	I1207 21:16:31.453939   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:31.454481   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:31.454508   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:31.454426   51971 retry.go:31] will retry after 818.736684ms: waiting for machine to come up
	I1207 21:16:32.274582   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:32.275065   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:32.275100   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:32.275018   51971 retry.go:31] will retry after 865.865666ms: waiting for machine to come up
	I1207 21:16:33.142356   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:33.142713   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:33.142748   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:33.142655   51971 retry.go:31] will retry after 1.270743306s: waiting for machine to come up
	I1207 21:16:31.473652   51113 crio.go:444] Took 1.911834 seconds to copy over tarball
	I1207 21:16:31.473729   51113 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:16:34.448164   51113 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974406678s)
	I1207 21:16:34.448185   51113 crio.go:451] Took 2.974507 seconds to extract the tarball
	I1207 21:16:34.448196   51113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:16:34.493579   51113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:34.555669   51113 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:16:34.555694   51113 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:16:34.555760   51113 ssh_runner.go:195] Run: crio config
	I1207 21:16:34.637813   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:16:34.637855   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:34.637874   51113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:34.637909   51113 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-275828 NodeName:default-k8s-diff-port-275828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:16:34.638088   51113 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.254
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-275828"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:34.638186   51113 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-275828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1207 21:16:34.638255   51113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:16:34.651147   51113 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:34.651264   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:34.660855   51113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1207 21:16:34.678841   51113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:16:34.696338   51113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1207 21:16:34.718058   51113 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:34.722640   51113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:34.737097   51113 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828 for IP: 192.168.39.254
	I1207 21:16:34.737138   51113 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:34.737316   51113 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:34.737367   51113 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:34.737459   51113 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.key
	I1207 21:16:34.737557   51113 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.key.9e1cae77
	I1207 21:16:34.737614   51113 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.key
	I1207 21:16:34.737745   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:34.737783   51113 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:34.737799   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:34.737835   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:34.737870   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:34.737904   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:34.737976   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:34.738542   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:34.768389   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:16:34.801112   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:31.931027   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:34.430620   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:31.903642   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:31.903781   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:31.919330   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:32.403857   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:32.403949   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:32.419078   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:32.903477   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:32.903561   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:32.918946   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:33.403477   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:33.403605   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:33.416411   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:33.903561   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:33.903690   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:33.915554   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:34.379314   51037 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:16:34.379347   51037 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:16:34.379361   51037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:16:34.379450   51037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:34.427182   51037 cri.go:89] found id: ""
	I1207 21:16:34.427255   51037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:16:34.448141   51037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:16:34.462411   51037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:16:34.462494   51037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:34.474410   51037 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:34.474442   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:34.646144   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.548212   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.745964   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.818060   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.899490   51037 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:16:35.899616   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:35.916336   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:36.432466   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
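The restart path above re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) and then polls for the kube-apiserver process with pgrep. A rough Go sketch of that sequence follows, reusing the binary and config paths shown in the log; it is an illustration of the logged order only, not minikube's actual implementation.

// sketch_restart_phases.go (illustrative only)
package main

import (
	"log"
	"os/exec"
	"time"
)

func run(args ...string) {
	cmd := exec.Command("sudo", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.1/kubeadm"
	cfg := "--config=/var/tmp/minikube/kubeadm.yaml"
	for _, phase := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	} {
		run(append(append([]string{kubeadm, "init", "phase"}, phase...), cfg)...)
	}
	// Poll for the apiserver process the same way the log does (pgrep).
	for i := 0; i < 20; i++ {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			log.Println("kube-apiserver is running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("kube-apiserver did not appear")
}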
	I1207 21:16:34.415333   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:34.415908   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:34.415935   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:34.415819   51971 retry.go:31] will retry after 1.846003214s: waiting for machine to come up
	I1207 21:16:36.262900   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:36.263321   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:36.263343   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:36.263283   51971 retry.go:31] will retry after 1.858599877s: waiting for machine to come up
	I1207 21:16:38.124144   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:38.124669   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:38.124701   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:38.124622   51971 retry.go:31] will retry after 2.443451278s: waiting for machine to come up
	I1207 21:16:34.830966   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:16:35.094040   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:35.121234   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:35.148659   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:35.176938   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:35.206320   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:35.234907   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:35.261034   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:35.286500   51113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:35.306742   51113 ssh_runner.go:195] Run: openssl version
	I1207 21:16:35.314676   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:35.325752   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.332066   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.332147   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.339606   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:35.350274   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:35.360328   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.365516   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.365593   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.371482   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:35.381328   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:35.391869   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.396986   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.397051   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.402939   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:35.413428   51113 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:35.419598   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:35.427748   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:35.435492   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:35.442272   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:35.450180   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:35.459639   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:16:35.467615   51113 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:35.467736   51113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:35.467793   51113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:35.504593   51113 cri.go:89] found id: ""
	I1207 21:16:35.504685   51113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:35.514155   51113 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:35.514182   51113 kubeadm.go:636] restartCluster start
	I1207 21:16:35.514255   51113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:35.525515   51113 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:35.526798   51113 kubeconfig.go:92] found "default-k8s-diff-port-275828" server: "https://192.168.39.254:8444"
	I1207 21:16:35.529447   51113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:35.540876   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:35.540934   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:35.555494   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:35.555519   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:35.555569   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:35.569455   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.069801   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:36.069903   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:36.083366   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.569984   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:36.570078   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:36.585387   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:37.069869   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:37.069980   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:37.086900   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:37.570490   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:37.570597   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:37.586215   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:38.069601   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:38.069709   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:38.084557   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:38.570194   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:38.570306   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:38.586686   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:39.070433   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:39.070518   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:39.088460   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:39.570579   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:39.570654   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:39.588478   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.785543   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:38.932981   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:36.932228   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:37.432719   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:37.932863   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.432661   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.932210   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.965380   51037 api_server.go:72] duration metric: took 3.065893789s to wait for apiserver process to appear ...
	I1207 21:16:38.965409   51037 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:38.965425   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:40.571221   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:40.571824   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:40.571873   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:40.571774   51971 retry.go:31] will retry after 2.349695925s: waiting for machine to come up
	I1207 21:16:42.923107   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:42.923582   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:42.923618   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:42.923549   51971 retry.go:31] will retry after 4.503894046s: waiting for machine to come up
	I1207 21:16:40.070126   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:40.070229   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:40.085086   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:40.570237   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:40.570329   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:40.584997   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:41.069554   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:41.069706   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:41.084654   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:41.570175   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:41.570260   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:41.581973   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:42.070546   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:42.070641   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:42.085859   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:42.570428   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:42.570534   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:42.585491   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.070017   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:43.070132   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:43.082461   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.569992   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:43.570093   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:43.585221   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:44.069681   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:44.069749   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:44.081499   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:44.569999   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:44.570083   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:44.585512   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.598644   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:43.598675   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:43.598689   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:43.649508   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:43.649553   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:44.150221   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:44.155890   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:44.155914   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:44.649610   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:44.655402   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:44.655437   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:45.150082   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:45.156432   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 200:
	ok
	I1207 21:16:45.172948   51037 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 21:16:45.172983   51037 api_server.go:131] duration metric: took 6.207566234s to wait for apiserver health ...
	I1207 21:16:45.172996   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:16:45.173002   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:45.175018   51037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
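Process 51037 above waits for the apiserver's /healthz to go from 403 (the anonymous probe is rejected) through 500 (post-start hooks such as rbac/bootstrap-roles still failing) to 200. Below is a self-contained Go sketch of such a wait loop against the endpoint from the log; skipping TLS verification and the two-minute deadline are assumptions made only to keep the example short, not what the tool itself does.

// sketch_healthz_wait.go (illustrative only)
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // assumption for the sketch
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.100:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		} else {
			log.Printf("healthz not reachable yet: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}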
	I1207 21:16:41.430106   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:43.431417   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:45.932644   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:45.176436   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:45.231836   51037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:16:45.250256   51037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:45.270151   51037 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:45.270188   51037 system_pods.go:61] "coredns-76f75df574-qfwbr" [577161a0-8d68-41cc-88cd-1bd56e99b7aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:45.270198   51037 system_pods.go:61] "etcd-no-preload-950431" [8e49a6a7-c1e5-469d-9b30-c8e59471effb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:45.270210   51037 system_pods.go:61] "kube-apiserver-no-preload-950431" [15bc33db-995d-4102-9a2b-e991209c2946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:45.270220   51037 system_pods.go:61] "kube-controller-manager-no-preload-950431" [c263b58e-2aea-455d-8b2f-8915f1c6e820] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:45.270232   51037 system_pods.go:61] "kube-proxy-mzv22" [96e51e2f-17be-4724-ae28-99dfa63e9976] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:45.270241   51037 system_pods.go:61] "kube-scheduler-no-preload-950431" [c040d573-c78f-4149-8be6-af33fc6ea186] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:45.270257   51037 system_pods.go:61] "metrics-server-57f55c9bc5-fv8x4" [ac03a70e-1059-474f-b6f6-5974f0900bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:45.270268   51037 system_pods.go:61] "storage-provisioner" [3f942481-221c-4e69-a876-f82676cde788] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:45.270279   51037 system_pods.go:74] duration metric: took 19.99813ms to wait for pod list to return data ...
	I1207 21:16:45.270291   51037 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:45.274636   51037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:45.274667   51037 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:45.274681   51037 node_conditions.go:105] duration metric: took 4.381452ms to run NodePressure ...
	I1207 21:16:45.274700   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:45.597857   51037 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:45.603394   51037 kubeadm.go:787] kubelet initialised
	I1207 21:16:45.603423   51037 kubeadm.go:788] duration metric: took 5.535827ms waiting for restarted kubelet to initialise ...
	I1207 21:16:45.603432   51037 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:45.612509   51037 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-qfwbr" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:47.430850   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.431364   50270 main.go:141] libmachine: (old-k8s-version-483745) Found IP for machine: 192.168.61.171
	I1207 21:16:47.431389   50270 main.go:141] libmachine: (old-k8s-version-483745) Reserving static IP address...
	I1207 21:16:47.431415   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has current primary IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.431791   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "old-k8s-version-483745", mac: "52:54:00:55:c8:35", ip: "192.168.61.171"} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.431827   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | skip adding static IP to network mk-old-k8s-version-483745 - found existing host DHCP lease matching {name: "old-k8s-version-483745", mac: "52:54:00:55:c8:35", ip: "192.168.61.171"}
	I1207 21:16:47.431845   50270 main.go:141] libmachine: (old-k8s-version-483745) Reserved static IP address: 192.168.61.171
	I1207 21:16:47.431866   50270 main.go:141] libmachine: (old-k8s-version-483745) Waiting for SSH to be available...
	I1207 21:16:47.431884   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Getting to WaitForSSH function...
	I1207 21:16:47.434071   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.434391   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.434423   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.434511   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Using SSH client type: external
	I1207 21:16:47.434548   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa (-rw-------)
	I1207 21:16:47.434590   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:47.434624   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | About to run SSH command:
	I1207 21:16:47.434642   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | exit 0
	I1207 21:16:47.529747   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:47.530150   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetConfigRaw
	I1207 21:16:47.530743   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:47.533361   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.533690   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.533728   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.534019   50270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/config.json ...
	I1207 21:16:47.534201   50270 machine.go:88] provisioning docker machine ...
	I1207 21:16:47.534219   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:47.534379   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.534549   50270 buildroot.go:166] provisioning hostname "old-k8s-version-483745"
	I1207 21:16:47.534578   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.534793   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.537037   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.537448   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.537482   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.537621   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:47.537788   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.537963   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.538107   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:47.538276   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:47.538728   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:47.538751   50270 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-483745 && echo "old-k8s-version-483745" | sudo tee /etc/hostname
	I1207 21:16:47.694514   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-483745
	
	I1207 21:16:47.694552   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.697720   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.698181   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.698217   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.698413   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:47.698602   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.698752   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.698958   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:47.699158   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:47.699617   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:47.699646   50270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-483745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-483745/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-483745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:47.851750   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:47.851781   50270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:47.851817   50270 buildroot.go:174] setting up certificates
	I1207 21:16:47.851830   50270 provision.go:83] configureAuth start
	I1207 21:16:47.851848   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.852181   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:47.855229   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.855607   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.855633   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.855891   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.858432   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.858811   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.858868   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.859066   50270 provision.go:138] copyHostCerts
	I1207 21:16:47.859126   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:47.859146   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:47.859211   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:47.859312   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:47.859322   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:47.859352   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:47.859426   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:47.859436   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:47.859465   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:47.859532   50270 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-483745 san=[192.168.61.171 192.168.61.171 localhost 127.0.0.1 minikube old-k8s-version-483745]
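For the provisioning step above, here is a small Go sketch of issuing a server certificate carrying the same SAN list. It self-signs for brevity, whereas the log shows the certificate being signed against the shared CA key, so treat it purely as an illustration of the SAN handling; the expiry mirrors the CertExpiration value from the config dump.

// sketch_server_cert.go (illustrative only)
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-483745"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-483745"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.171"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}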
	I1207 21:16:48.080700   50270 provision.go:172] copyRemoteCerts
	I1207 21:16:48.080764   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:48.080787   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.083799   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.084261   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.084325   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.084545   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.084752   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.084874   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.085025   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.188586   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:48.217051   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1207 21:16:48.245046   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:16:48.276344   50270 provision.go:86] duration metric: configureAuth took 424.496766ms
	I1207 21:16:48.276381   50270 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:48.276627   50270 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:16:48.276720   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.280119   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.280556   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.280627   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.280943   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.281127   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.281312   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.281452   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.281621   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:48.282136   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:48.282160   50270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:45.070516   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:45.070618   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:45.087880   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:45.541593   51113 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:16:45.541627   51113 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:16:45.541640   51113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:16:45.541714   51113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:45.589291   51113 cri.go:89] found id: ""
	I1207 21:16:45.589394   51113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:16:45.606397   51113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:16:45.616135   51113 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:16:45.616192   51113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:45.625661   51113 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:45.625689   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:45.750072   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.619750   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.838835   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.935494   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:47.007474   51113 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:16:47.007536   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:47.020817   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:47.536948   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:48.036982   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:48.537584   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.036899   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.537400   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.575582   51113 api_server.go:72] duration metric: took 2.568102787s to wait for apiserver process to appear ...
	I1207 21:16:49.575614   51113 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:49.575636   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:49.576140   51113 api_server.go:269] stopped: https://192.168.39.254:8444/healthz: Get "https://192.168.39.254:8444/healthz": dial tcp 192.168.39.254:8444: connect: connection refused
	I1207 21:16:49.576174   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:49.576630   51113 api_server.go:269] stopped: https://192.168.39.254:8444/healthz: Get "https://192.168.39.254:8444/healthz": dial tcp 192.168.39.254:8444: connect: connection refused
	I1207 21:16:48.639642   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:48.639702   50270 machine.go:91] provisioned docker machine in 1.10547448s
	I1207 21:16:48.639715   50270 start.go:300] post-start starting for "old-k8s-version-483745" (driver="kvm2")
	I1207 21:16:48.639733   50270 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:48.639772   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.640106   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:48.640136   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.643155   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.643592   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.643625   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.643897   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.644101   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.644253   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.644374   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.756527   50270 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:48.761976   50270 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:48.762042   50270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:48.762117   50270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:48.762229   50270 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:48.762355   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:48.773495   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:48.802433   50270 start.go:303] post-start completed in 162.696963ms
	I1207 21:16:48.802464   50270 fix.go:56] fixHost completed within 20.771337135s
	I1207 21:16:48.802489   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.805389   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.805821   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.805853   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.806002   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.806221   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.806361   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.806516   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.806737   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:48.807177   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:48.807194   50270 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 21:16:48.948515   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983808.895290650
	
	I1207 21:16:48.948602   50270 fix.go:206] guest clock: 1701983808.895290650
	I1207 21:16:48.948622   50270 fix.go:219] Guest: 2023-12-07 21:16:48.89529065 +0000 UTC Remote: 2023-12-07 21:16:48.802469186 +0000 UTC m=+365.320601213 (delta=92.821464ms)
	I1207 21:16:48.948679   50270 fix.go:190] guest clock delta is within tolerance: 92.821464ms
	I1207 21:16:48.948694   50270 start.go:83] releasing machines lock for "old-k8s-version-483745", held for 20.917606045s
	I1207 21:16:48.948726   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.948967   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:48.952007   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.952392   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.952424   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.952680   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953302   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953494   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953578   50270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:48.953633   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.953877   50270 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:48.953904   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.957083   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957288   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957631   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.957656   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957798   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.957849   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957874   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.958105   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.958110   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.958284   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.958413   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.958443   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.958665   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.958668   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:49.082678   50270 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:49.091075   50270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:49.250638   50270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:49.259237   50270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:49.259312   50270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:49.279490   50270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:49.279520   50270 start.go:475] detecting cgroup driver to use...
	I1207 21:16:49.279592   50270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:49.301129   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:49.317758   50270 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:49.317832   50270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:49.335384   50270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:49.352808   50270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:49.487177   50270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:49.622551   50270 docker.go:219] disabling docker service ...
	I1207 21:16:49.622632   50270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:49.641913   50270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:49.655046   50270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:49.780471   50270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:49.903816   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:49.917447   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:49.939101   50270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1207 21:16:49.939170   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.949112   50270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:49.949187   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.958706   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.968115   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.977516   50270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:49.987974   50270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:49.996996   50270 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:49.997069   50270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:50.009736   50270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:50.018888   50270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:50.136461   50270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:50.337931   50270 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:50.338013   50270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:50.344175   50270 start.go:543] Will wait 60s for crictl version
	I1207 21:16:50.344237   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:50.348418   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:50.387227   50270 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:50.387329   50270 ssh_runner.go:195] Run: crio --version
	I1207 21:16:50.439820   50270 ssh_runner.go:195] Run: crio --version
	I1207 21:16:50.492743   50270 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1207 21:16:48.431193   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:50.945823   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:47.635909   51037 pod_ready.go:102] pod "coredns-76f75df574-qfwbr" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:49.635091   51037 pod_ready.go:92] pod "coredns-76f75df574-qfwbr" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:49.635119   51037 pod_ready.go:81] duration metric: took 4.022584638s waiting for pod "coredns-76f75df574-qfwbr" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:49.635139   51037 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:51.656178   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:50.494290   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:50.496890   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:50.497226   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:50.497257   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:50.497557   50270 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:50.501988   50270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:50.516192   50270 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1207 21:16:50.516266   50270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:50.564641   50270 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1207 21:16:50.564723   50270 ssh_runner.go:195] Run: which lz4
	I1207 21:16:50.569306   50270 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 21:16:50.573458   50270 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:16:50.573483   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1207 21:16:52.405191   50270 crio.go:444] Took 1.835925 seconds to copy over tarball
	I1207 21:16:52.405260   50270 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:16:50.077304   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:54.602961   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:54.602994   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:54.603007   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:54.660014   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:54.660053   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:55.077712   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:55.102038   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:55.102068   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:55.577664   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:55.586714   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:55.586753   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:56.077361   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:56.084665   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 200:
	ok
	I1207 21:16:56.096164   51113 api_server.go:141] control plane version: v1.28.4
	I1207 21:16:56.096196   51113 api_server.go:131] duration metric: took 6.520574302s to wait for apiserver health ...
	I1207 21:16:56.096209   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:16:56.096219   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
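The healthz polling above keeps retrying while the apiserver returns connection refused, then 403 (anonymous access before RBAC bootstrap finishes), then 500 (post-start hooks still failing), and stops once it gets a plain 200 "ok". A minimal sketch of that kind of wait loop, assuming a self-signed apiserver certificate and illustrative intervals rather than minikube's actual api_server.go:

package healthcheck

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// WaitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// The insecure TLS config and the retry interval are assumptions made for a
// self-signed apiserver cert; this is not minikube's implementation.
func WaitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
			// 403/500 responses like the ones above mean the apiserver is up
			// but its post-start hooks have not completed yet; keep retrying.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not return 200 within %s", url, timeout)
}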
	I1207 21:16:53.431611   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:55.954091   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:53.656773   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:55.659213   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:56.811148   51113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:16:55.499497   50270 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094207903s)
	I1207 21:16:55.499524   50270 crio.go:451] Took 3.094311 seconds to extract the tarball
	I1207 21:16:55.499532   50270 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:16:55.539952   50270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:55.612029   50270 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1207 21:16:55.612059   50270 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:16:55.612164   50270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.612216   50270 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1207 21:16:55.612282   50270 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.612335   50270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.612216   50270 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.612433   50270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.612564   50270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.612575   50270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:55.614472   50270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.614496   50270 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1207 21:16:55.614496   50270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.614507   50270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.614513   50270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:55.614556   50270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.614571   50270 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.614556   50270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.744531   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1207 21:16:55.744539   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.747157   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.748014   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.754498   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.778012   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.781417   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.886272   50270 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1207 21:16:55.886318   50270 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1207 21:16:55.886371   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.949015   50270 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1207 21:16:55.949128   50270 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.949205   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.963217   50270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1207 21:16:55.963332   50270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.963422   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.966733   50270 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1207 21:16:55.966854   50270 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.966934   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.004614   50270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1207 21:16:56.004668   50270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:56.004721   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.015557   50270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1207 21:16:56.015655   50270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:56.015714   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.017603   50270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1207 21:16:56.017643   50270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:56.017686   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.017817   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1207 21:16:56.017913   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:56.018011   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:56.018087   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1207 21:16:56.018160   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:56.028183   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:56.030370   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:56.222552   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1207 21:16:56.222625   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1207 21:16:56.222673   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1207 21:16:56.222680   50270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.222731   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1207 21:16:56.222828   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1207 21:16:56.222911   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1207 21:16:56.236367   50270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1207 21:16:56.236387   50270 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.236440   50270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.236444   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1207 21:16:56.455526   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:58.094353   50270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.638791166s)
	I1207 21:16:58.094525   50270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.858047565s)
	I1207 21:16:58.094552   50270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1207 21:16:58.094591   50270 cache_images.go:92] LoadImages completed in 2.482516651s
	W1207 21:16:58.094650   50270 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I1207 21:16:58.094729   50270 ssh_runner.go:195] Run: crio config
	I1207 21:16:58.191059   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:16:58.191083   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:58.191108   50270 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:58.191132   50270 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.171 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-483745 NodeName:old-k8s-version-483745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1207 21:16:58.191279   50270 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-483745"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-483745
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.171:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:58.191389   50270 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-483745 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-483745 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
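The kubelet drop-in shown above is generated from the profile's cluster config and, a few lines later, copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small, hypothetical sketch of rendering such a drop-in with text/template; the value map and the trimmed flag set are assumptions, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// dropIn is an illustrative systemd drop-in template modeled on the unit
// dumped above; it is not the template minikube itself uses.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet-dropin").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.16.0",
		"NodeName":          "old-k8s-version-483745",
		"NodeIP":            "192.168.61.171",
	})
}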
	I1207 21:16:58.191462   50270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1207 21:16:58.204882   50270 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:58.204948   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:58.217370   50270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1207 21:16:58.237205   50270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:16:58.256539   50270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1207 21:16:58.276428   50270 ssh_runner.go:195] Run: grep 192.168.61.171	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:58.281568   50270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:58.295073   50270 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745 for IP: 192.168.61.171
	I1207 21:16:58.295112   50270 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:58.295295   50270 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:58.295368   50270 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:58.295493   50270 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.key
	I1207 21:16:58.295589   50270 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.key.13a54c20
	I1207 21:16:58.295658   50270 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.key
	I1207 21:16:58.295817   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:58.295861   50270 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:58.295887   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:58.295922   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:58.295972   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:58.296012   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:58.296067   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:58.296936   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:58.327708   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:16:58.354646   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:58.379025   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 21:16:58.404362   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:58.433648   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:58.459739   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:58.487457   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:58.516507   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:57.214999   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:57.244196   51113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:16:57.264778   51113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:57.978177   51113 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:57.978214   51113 system_pods.go:61] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:57.978224   51113 system_pods.go:61] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:57.978232   51113 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:57.978241   51113 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:57.978248   51113 system_pods.go:61] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:57.978254   51113 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:57.978261   51113 system_pods.go:61] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:57.978267   51113 system_pods.go:61] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:57.978276   51113 system_pods.go:74] duration metric: took 713.475246ms to wait for pod list to return data ...
	I1207 21:16:57.978285   51113 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:57.983354   51113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:57.983379   51113 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:57.983389   51113 node_conditions.go:105] duration metric: took 5.099916ms to run NodePressure ...
	I1207 21:16:57.983403   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:58.583287   51113 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:58.590472   51113 kubeadm.go:787] kubelet initialised
	I1207 21:16:58.590500   51113 kubeadm.go:788] duration metric: took 7.176115ms waiting for restarted kubelet to initialise ...
	I1207 21:16:58.590509   51113 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:58.597622   51113 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.609459   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.609491   51113 pod_ready.go:81] duration metric: took 11.841558ms waiting for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.609503   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.609513   51113 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.620143   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.620172   51113 pod_ready.go:81] duration metric: took 10.647465ms waiting for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.620185   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.620193   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.633821   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.633850   51113 pod_ready.go:81] duration metric: took 13.645914ms waiting for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.633864   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.633872   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.647333   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.647359   51113 pod_ready.go:81] duration metric: took 13.477348ms waiting for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.647373   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.647385   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.988420   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-proxy-nmx2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.988448   51113 pod_ready.go:81] duration metric: took 341.054838ms waiting for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.988457   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-proxy-nmx2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.988465   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.388053   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.388080   51113 pod_ready.go:81] duration metric: took 399.605098ms waiting for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:59.388090   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.388097   51113 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.787887   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.787913   51113 pod_ready.go:81] duration metric: took 399.809388ms waiting for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:59.787925   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.787932   51113 pod_ready.go:38] duration metric: took 1.197413161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:59.787945   51113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:16:59.801806   51113 ops.go:34] apiserver oom_adj: -16
	I1207 21:16:59.801828   51113 kubeadm.go:640] restartCluster took 24.28763849s
	I1207 21:16:59.801837   51113 kubeadm.go:406] StartCluster complete in 24.334230687s
	I1207 21:16:59.801855   51113 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:59.801945   51113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:16:59.804179   51113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:59.804458   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:16:59.804515   51113 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:16:59.804612   51113 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.804638   51113 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.804646   51113 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:16:59.804695   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:16:59.804714   51113 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.804727   51113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-275828"
	I1207 21:16:59.804704   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.805119   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805150   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805168   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.805180   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.805204   51113 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.805226   51113 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.805235   51113 addons.go:240] addon metrics-server should already be in state true
	I1207 21:16:59.805277   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.805627   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805663   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.811657   51113 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-275828" context rescaled to 1 replicas
	I1207 21:16:59.811696   51113 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:16:59.814005   51113 out.go:177] * Verifying Kubernetes components...
	I1207 21:16:59.815636   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:16:59.822134   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I1207 21:16:59.822558   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.822636   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34811
	I1207 21:16:59.822718   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I1207 21:16:59.823063   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823104   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823126   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.823128   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.823479   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.823605   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823619   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823636   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823636   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823943   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.823970   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.824050   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.824102   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.824193   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.824463   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.824502   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.828241   51113 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.828264   51113 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:16:59.828292   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.828676   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.830577   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.841996   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I1207 21:16:59.842283   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I1207 21:16:59.842697   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.842888   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.843254   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.843277   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.843391   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.843416   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.843638   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.843779   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.843831   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.843973   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.845644   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.845852   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.847586   51113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:59.847253   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I1207 21:16:59.849062   51113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:16:57.998272   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:00.429603   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:59.850487   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:16:59.850500   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:16:59.850514   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.849121   51113 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:16:59.850564   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:16:59.850583   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.849452   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.851054   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.851071   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.851664   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.852274   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.852315   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.854738   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.855190   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.855204   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.855394   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.855556   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.855649   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.855724   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.856210   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.856582   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.856596   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.856720   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.856846   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.857188   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.857324   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.871856   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I1207 21:16:59.872193   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.872726   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.872744   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.873088   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.873243   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.874542   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.874803   51113 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:16:59.874821   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:16:59.874840   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.877142   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.877524   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.877547   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.877753   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.877889   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.878024   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.878137   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.983279   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:17:00.040397   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:17:00.056981   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:17:00.057008   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:17:00.078195   51113 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1207 21:17:00.078235   51113 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-275828" to be "Ready" ...
	I1207 21:17:00.117369   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:17:00.117399   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:17:00.177756   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:17:00.177783   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:17:00.220667   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:17:01.338599   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.298167461s)
	I1207 21:17:01.338648   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338662   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.338747   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.355434262s)
	I1207 21:17:01.338789   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338802   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.338925   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.338945   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.338960   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338969   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.340360   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340373   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340381   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.340357   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340368   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340472   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.340490   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.340504   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.340785   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340788   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340804   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.347722   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.347741   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.347933   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.347950   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.347968   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.434021   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.213311264s)
	I1207 21:17:01.434084   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.434099   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.434391   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.434413   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.434410   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.434423   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.434434   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.434627   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.434637   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.434648   51113 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-275828"
	I1207 21:17:01.436476   51113 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:16:57.997177   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:59.154238   51037 pod_ready.go:92] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.154261   51037 pod_ready.go:81] duration metric: took 9.519115953s waiting for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.154270   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.159402   51037 pod_ready.go:92] pod "kube-apiserver-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.159421   51037 pod_ready.go:81] duration metric: took 5.143876ms waiting for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.159431   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.164107   51037 pod_ready.go:92] pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.164124   51037 pod_ready.go:81] duration metric: took 4.684573ms waiting for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.164134   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mzv22" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.168711   51037 pod_ready.go:92] pod "kube-proxy-mzv22" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.168727   51037 pod_ready.go:81] duration metric: took 4.587318ms waiting for pod "kube-proxy-mzv22" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.168734   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.201648   51037 pod_ready.go:92] pod "kube-scheduler-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.201676   51037 pod_ready.go:81] duration metric: took 32.935891ms waiting for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.201688   51037 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:01.509707   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:58.544765   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:58.571376   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:58.597700   50270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:58.616720   50270 ssh_runner.go:195] Run: openssl version
	I1207 21:16:58.622830   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:58.634656   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.640469   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.640526   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.646624   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:58.660113   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:58.670742   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.675735   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.675782   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.682821   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:58.696760   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:58.710547   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.716983   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.717048   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.724400   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:58.736496   50270 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:58.742587   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:58.750398   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:58.757537   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:58.764361   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:58.771280   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:58.778697   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:16:58.785873   50270 kubeadm.go:404] StartCluster: {Name:old-k8s-version-483745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-483745 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:58.786022   50270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:58.786079   50270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:58.834174   50270 cri.go:89] found id: ""
	I1207 21:16:58.834262   50270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:58.845932   50270 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:58.845958   50270 kubeadm.go:636] restartCluster start
	I1207 21:16:58.846025   50270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:58.855982   50270 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:58.857458   50270 kubeconfig.go:92] found "old-k8s-version-483745" server: "https://192.168.61.171:8443"
	I1207 21:16:58.860840   50270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:58.870183   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:58.870235   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:58.881631   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:58.881647   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:58.881693   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:58.892422   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:59.393094   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:59.393163   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:59.405578   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:59.893104   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:59.893160   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:59.906998   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:00.393560   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:00.393629   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:00.405837   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:00.893376   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:00.893472   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:00.905785   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.393118   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:01.393204   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:01.405693   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.893214   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:01.893348   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:01.906272   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:02.392588   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:02.392682   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:02.404717   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:02.893325   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:02.893425   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:02.906705   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:03.392549   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:03.392627   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:03.406493   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.437892   51113 addons.go:502] enable addons completed in 1.633389199s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:17:02.198851   51113 node_ready.go:58] node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:17:04.199518   51113 node_ready.go:58] node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:17:02.931262   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:05.431344   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:03.509733   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:05.511779   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:03.892711   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:03.892814   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:03.905553   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:04.393144   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:04.393236   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:04.406280   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:04.893375   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:04.893459   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:04.905715   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.393376   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:05.393473   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:05.405757   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.892719   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:05.892800   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:05.906258   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:06.392706   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:06.392787   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:06.405913   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:06.893392   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:06.893475   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:06.908660   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:07.392944   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:07.393037   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:07.408113   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:07.892488   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:07.892602   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:07.905157   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:08.393126   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:08.393209   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:08.405227   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.197790   51113 node_ready.go:49] node "default-k8s-diff-port-275828" has status "Ready":"True"
	I1207 21:17:05.197814   51113 node_ready.go:38] duration metric: took 5.119553512s waiting for node "default-k8s-diff-port-275828" to be "Ready" ...
	I1207 21:17:05.197825   51113 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:17:05.204644   51113 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:07.225887   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:09.229380   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:07.928733   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:09.929797   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:08.009114   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:10.012079   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:08.870396   50270 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:17:08.870427   50270 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:17:08.870439   50270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:17:08.870496   50270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:17:08.914337   50270 cri.go:89] found id: ""
	I1207 21:17:08.914412   50270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:17:08.932406   50270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:17:08.941877   50270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:17:08.942012   50270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:17:08.952016   50270 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:17:08.952038   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:09.086175   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:09.811331   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.044161   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.117851   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.218309   50270 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:17:10.218376   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:10.231007   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:10.754756   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.255150   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.755138   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.782482   50270 api_server.go:72] duration metric: took 1.564169408s to wait for apiserver process to appear ...
	I1207 21:17:11.782510   50270 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:17:11.782543   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:11.729870   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:12.727588   51113 pod_ready.go:92] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.727621   51113 pod_ready.go:81] duration metric: took 7.52294973s waiting for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.727635   51113 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.733893   51113 pod_ready.go:92] pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.733936   51113 pod_ready.go:81] duration metric: took 6.276731ms waiting for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.733951   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.739431   51113 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.739456   51113 pod_ready.go:81] duration metric: took 5.495838ms waiting for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.739467   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.745435   51113 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.745456   51113 pod_ready.go:81] duration metric: took 5.98053ms waiting for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.745468   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.751301   51113 pod_ready.go:92] pod "kube-proxy-nmx2z" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.751323   51113 pod_ready.go:81] duration metric: took 5.845741ms waiting for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.751333   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:13.122896   51113 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:13.122923   51113 pod_ready.go:81] duration metric: took 371.582675ms waiting for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:13.122936   51113 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:11.931676   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:14.433505   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:12.510180   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:14.511615   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.519216   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.783319   50270 api_server.go:269] stopped: https://192.168.61.171:8443/healthz: Get "https://192.168.61.171:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1207 21:17:16.783432   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:17.468175   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:17:17.468210   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:17:17.968919   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:17.975181   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1207 21:17:17.975206   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1207 21:17:18.469287   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:18.476311   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1207 21:17:18.476340   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1207 21:17:18.968605   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:18.974285   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:17:18.981956   50270 api_server.go:141] control plane version: v1.16.0
	I1207 21:17:18.981983   50270 api_server.go:131] duration metric: took 7.199466057s to wait for apiserver health ...
	I1207 21:17:18.981994   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:17:18.982000   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:17:18.983962   50270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:17:15.433488   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:17.434321   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.931755   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:19.430606   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:19.010615   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:21.512114   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:18.985481   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:17:18.994841   50270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:17:19.015418   50270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:17:19.029654   50270 system_pods.go:59] 7 kube-system pods found
	I1207 21:17:19.029685   50270 system_pods.go:61] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:17:19.029692   50270 system_pods.go:61] "etcd-old-k8s-version-483745" [4a920248-1b35-4834-9e6f-a0e7567b5bb8] Running
	I1207 21:17:19.029699   50270 system_pods.go:61] "kube-apiserver-old-k8s-version-483745" [aaba6fb9-56a1-497d-a398-5c685f5500dd] Running
	I1207 21:17:19.029706   50270 system_pods.go:61] "kube-controller-manager-old-k8s-version-483745" [a13bda00-a0f4-4f59-8b52-65589579efcf] Running
	I1207 21:17:19.029711   50270 system_pods.go:61] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:17:19.029715   50270 system_pods.go:61] "kube-scheduler-old-k8s-version-483745" [4fc3e12a-e294-457e-912f-0ed765ad4def] Running
	I1207 21:17:19.029718   50270 system_pods.go:61] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:17:19.029726   50270 system_pods.go:74] duration metric: took 14.290629ms to wait for pod list to return data ...
	I1207 21:17:19.029739   50270 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:17:19.033868   50270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:17:19.033897   50270 node_conditions.go:123] node cpu capacity is 2
	I1207 21:17:19.033911   50270 node_conditions.go:105] duration metric: took 4.166175ms to run NodePressure ...
	I1207 21:17:19.033945   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:19.284413   50270 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:17:19.288373   50270 retry.go:31] will retry after 182.556746ms: kubelet not initialised
	I1207 21:17:19.479987   50270 retry.go:31] will retry after 253.110045ms: kubelet not initialised
	I1207 21:17:19.744586   50270 retry.go:31] will retry after 608.133785ms: kubelet not initialised
	I1207 21:17:20.357758   50270 retry.go:31] will retry after 829.182382ms: kubelet not initialised
	I1207 21:17:21.192621   50270 retry.go:31] will retry after 998.365497ms: kubelet not initialised
	I1207 21:17:22.196882   50270 retry.go:31] will retry after 1.144379185s: kubelet not initialised
	I1207 21:17:23.346660   50270 retry.go:31] will retry after 4.175853771s: kubelet not initialised
	I1207 21:17:19.937119   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:22.433221   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:21.430858   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:23.929526   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:25.932244   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:24.011486   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:26.509908   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:27.529200   50270 retry.go:31] will retry after 6.099259697s: kubelet not initialised
	I1207 21:17:24.932035   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:26.932432   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:28.935455   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:27.933244   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:30.431008   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:29.009917   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:31.509259   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:31.432441   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.933226   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:32.431713   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:34.931903   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.510686   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:35.511611   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.635018   50270 retry.go:31] will retry after 3.426713545s: kubelet not initialised
	I1207 21:17:37.067021   50270 retry.go:31] will retry after 7.020738309s: kubelet not initialised
	I1207 21:17:35.933872   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:38.432200   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:37.432208   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:39.432443   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:38.008964   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:40.013143   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:40.434554   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:42.935808   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:41.931614   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:44.431445   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:42.510798   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:45.010221   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:44.093245   50270 retry.go:31] will retry after 15.092242293s: kubelet not initialised
	I1207 21:17:45.433353   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:47.933249   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:46.931078   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:49.430564   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:47.510355   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:50.010022   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:49.935001   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:52.433167   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:51.430664   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:53.431310   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:55.431508   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:52.509729   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:55.010127   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:54.937299   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.432126   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.929516   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:59.929800   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.511723   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:00.010732   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:59.190582   50270 retry.go:31] will retry after 18.708242221s: kubelet not initialised
	I1207 21:17:59.932898   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.435773   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.429487   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.931336   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.011470   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.508873   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:06.510378   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.932311   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:07.434111   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:07.431033   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.931058   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.009614   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:11.009942   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.932527   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:11.933100   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:14.432890   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:12.429420   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:14.431778   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:13.010085   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:15.509812   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:17.907480   50270 kubeadm.go:787] kubelet initialised
	I1207 21:18:17.907516   50270 kubeadm.go:788] duration metric: took 58.6230723s waiting for restarted kubelet to initialise ...
	I1207 21:18:17.907523   50270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:18:17.912349   50270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.917692   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.917710   50270 pod_ready.go:81] duration metric: took 5.339125ms waiting for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.917718   50270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.923173   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.923192   50270 pod_ready.go:81] duration metric: took 5.469466ms waiting for pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.923200   50270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.928824   50270 pod_ready.go:92] pod "etcd-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.928846   50270 pod_ready.go:81] duration metric: took 5.638159ms waiting for pod "etcd-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.928856   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.934993   50270 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.935014   50270 pod_ready.go:81] duration metric: took 6.149728ms waiting for pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.935025   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.311907   50270 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:18.311934   50270 pod_ready.go:81] duration metric: took 376.900024ms waiting for pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.311947   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:16.931768   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.932732   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:16.930954   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.932194   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.009341   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:20.010383   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.709795   50270 pod_ready.go:92] pod "kube-proxy-wrl9t" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:18.709818   50270 pod_ready.go:81] duration metric: took 397.865434ms waiting for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.709828   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:19.107018   50270 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:19.107046   50270 pod_ready.go:81] duration metric: took 397.21085ms waiting for pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:19.107074   50270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:21.413113   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.414993   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:20.937780   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.432192   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:21.429764   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.430826   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.930929   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:22.510894   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.009872   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.914333   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.914486   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.432249   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.432529   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.930973   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.430718   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.510016   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.009983   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.415400   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.912237   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:29.932694   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.433150   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.432680   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.931118   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.010572   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.508896   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:36.509628   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.913374   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:36.914250   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.933409   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:37.432655   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.432740   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:37.430165   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.930630   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.009629   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:41.009658   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:38.914325   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:40.915158   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:43.413980   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:41.932574   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:44.432525   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:42.431330   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:44.929635   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:43.009978   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:45.010954   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:45.414082   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.415225   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:46.932342   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:48.932460   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.429890   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.931948   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.508820   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.508885   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:51.510909   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.916969   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:52.414590   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:51.431888   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:53.432497   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:52.429836   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.429987   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.010442   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.520121   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.415187   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.914505   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:55.433372   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:57.437496   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.932937   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.430774   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.010885   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.510473   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.413820   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.413911   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.414163   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.932159   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.932344   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:04.432873   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.430926   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.930199   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.930253   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.511496   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.512541   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.913832   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:07.915554   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:06.433629   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:08.933148   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:07.931760   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.431655   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:08.009852   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.010279   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.415114   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.913846   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:11.433166   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:13.933572   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.930147   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:14.935480   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.010617   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:14.510815   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:15.414959   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.913372   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:16.433375   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:18.932915   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.436017   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.933613   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.008855   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.010583   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.510650   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.913760   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.913931   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.434113   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:23.932185   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:22.429942   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:24.432486   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:24.009731   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.513595   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:23.913964   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:25.915033   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:28.415173   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.433721   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:28.932763   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.934197   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:29.432795   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:29.008998   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.011163   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:30.912991   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:32.914672   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.432802   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.932626   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.930505   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.931069   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.510138   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:36.010166   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:34.915019   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:37.414169   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:35.933595   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.432419   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:36.433061   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.929697   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.930753   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.509265   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.509898   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:39.414719   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:41.914208   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.932356   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:42.932643   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:43.430519   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:45.930095   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:42.510763   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:44.511006   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:43.914874   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:46.414739   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:45.431904   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.932732   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.930507   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:49.930634   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.009537   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:49.009825   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.010633   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:48.914101   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.413288   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:50.433022   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:52.932549   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.930920   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:54.433488   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:53.508693   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.509440   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:53.913446   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.914532   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.416064   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.432116   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:57.935271   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:56.929900   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.931501   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.009318   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.510190   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.915025   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.414806   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.432326   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:02.432758   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:04.434643   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:01.431826   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.931069   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.931648   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.010188   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.010498   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.914269   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:07.914640   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:06.931909   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:08.932549   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:08.431136   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.932438   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:07.509186   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:09.511791   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.415605   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:12.918130   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.934599   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:13.434477   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:13.430502   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.434943   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:12.008903   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:14.010390   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:16.509062   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.415237   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.914465   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.435338   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.933559   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.931293   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:18.408309   50624 pod_ready.go:81] duration metric: took 4m0.000858815s waiting for pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:18.408355   50624 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:20:18.408376   50624 pod_ready.go:38] duration metric: took 4m11.111070516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:18.408405   50624 kubeadm.go:640] restartCluster took 4m30.625453328s
	W1207 21:20:18.408479   50624 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:20:18.408513   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:20:18.510036   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:20.510485   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:19.915160   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:21.915544   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:19.940064   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:22.432481   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:24.432791   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:23.010158   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:25.509777   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:23.915685   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:26.414017   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.415525   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:26.435601   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.932153   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.009824   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:30.509369   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:32.372266   50624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.96372485s)
	I1207 21:20:32.372349   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:20:32.386002   50624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:20:32.395757   50624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:20:32.406709   50624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
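For context, the "config check" above is just the ls invocation quoted in the log: because kubeadm reset has already removed the old kubeconfigs, ls exits with status 2 and minikube skips the stale-config cleanup. A minimal shell equivalent (paths taken verbatim from the log) would be:

    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
         /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
      || echo "config check failed, skipping stale config cleanup"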
	I1207 21:20:32.406761   50624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:20:32.465707   50624 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 21:20:32.465842   50624 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:20:32.636031   50624 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:20:32.636171   50624 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:20:32.636296   50624 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:20:32.892368   50624 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:20:32.894341   50624 out.go:204]   - Generating certificates and keys ...
	I1207 21:20:32.894484   50624 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:20:32.894581   50624 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:20:32.894717   50624 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:20:32.894799   50624 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:20:32.895289   50624 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:20:32.895583   50624 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:20:32.896112   50624 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:20:32.896577   50624 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:20:32.897032   50624 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:20:32.897567   50624 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:20:32.897804   50624 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:20:32.897886   50624 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:20:32.942322   50624 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:20:33.084899   50624 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:20:33.286309   50624 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:20:33.482188   50624 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:20:33.483077   50624 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:20:33.487928   50624 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:20:30.912937   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:32.914703   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:30.934926   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:33.431849   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:33.489853   50624 out.go:204]   - Booting up control plane ...
	I1207 21:20:33.490021   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:20:33.490177   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:20:33.490458   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:20:33.509319   50624 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:20:33.509448   50624 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:20:33.509501   50624 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:20:33.654452   50624 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
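While kubeadm waits for the kubelet to bring up the static pods, the same progress can be checked by hand on the node. A small sketch, assuming the standard manifest directory named in the log and that crictl is available inside the guest:

    sudo ls /etc/kubernetes/manifests        # etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
    sudo crictl ps --name kube-apiserver     # confirm the API server container is running under CRI-O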
	I1207 21:20:32.509729   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:34.510930   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:34.918486   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.414467   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:35.432767   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.931132   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.009506   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:39.011200   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.509897   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.657033   50624 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003082 seconds
	I1207 21:20:41.657193   50624 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:20:41.673142   50624 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:20:42.218438   50624 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:20:42.218706   50624 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-598346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:20:42.745090   50624 kubeadm.go:322] [bootstrap-token] Using token: 74zooz.4uhmxlwojs4pjw69
	I1207 21:20:42.746934   50624 out.go:204]   - Configuring RBAC rules ...
	I1207 21:20:42.747111   50624 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:20:42.762521   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:20:42.776210   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:20:42.781152   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:20:42.786698   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:20:42.795815   50624 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:20:42.811407   50624 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:20:43.073430   50624 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:20:43.167611   50624 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:20:43.168880   50624 kubeadm.go:322] 
	I1207 21:20:43.168970   50624 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:20:43.169014   50624 kubeadm.go:322] 
	I1207 21:20:43.169111   50624 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:20:43.169132   50624 kubeadm.go:322] 
	I1207 21:20:43.169163   50624 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:20:43.169239   50624 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:20:43.169314   50624 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:20:43.169322   50624 kubeadm.go:322] 
	I1207 21:20:43.169394   50624 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:20:43.169402   50624 kubeadm.go:322] 
	I1207 21:20:43.169475   50624 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:20:43.169500   50624 kubeadm.go:322] 
	I1207 21:20:43.169591   50624 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:20:43.169701   50624 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:20:43.169799   50624 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:20:43.169811   50624 kubeadm.go:322] 
	I1207 21:20:43.169930   50624 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:20:43.170066   50624 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:20:43.170078   50624 kubeadm.go:322] 
	I1207 21:20:43.170177   50624 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 74zooz.4uhmxlwojs4pjw69 \
	I1207 21:20:43.170303   50624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:20:43.170332   50624 kubeadm.go:322] 	--control-plane 
	I1207 21:20:43.170338   50624 kubeadm.go:322] 
	I1207 21:20:43.170463   50624 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:20:43.170474   50624 kubeadm.go:322] 
	I1207 21:20:43.170590   50624 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 74zooz.4uhmxlwojs4pjw69 \
	I1207 21:20:43.170717   50624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:20:43.171438   50624 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
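The join commands above reuse the bootstrap token 74zooz.4uhmxlwojs4pjw69 together with the CA certificate hash. Should that hash ever need to be recomputed, the usual kubeadm recipe applies; the certificate path below is an assumption based on the certificateDir reported earlier in this log (a stock kubeadm install would use /etc/kubernetes/pki/ca.crt instead):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'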
	I1207 21:20:43.171461   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:20:43.171467   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:20:43.173556   50624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:20:39.415520   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.416257   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:39.933233   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.933860   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:44.432482   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:43.175267   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:20:43.199404   50624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
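The 457-byte conflist written above is not reproduced in the log. As an illustrative sketch only, a bridge CNI configuration of the kind minikube generates for the kvm2/CRI-O combination typically looks roughly like the following; the field values here are assumptions, not the actual contents of /etc/cni/net.d/1-k8s.conflist:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }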
	I1207 21:20:43.237091   50624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:20:43.237150   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.237203   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=embed-certs-598346 minikube.k8s.io/updated_at=2023_12_07T21_20_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.303369   50624 ops.go:34] apiserver oom_adj: -16
	I1207 21:20:43.670500   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.788364   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:44.394973   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:44.894494   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:45.394695   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:45.895141   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.509949   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:45.511007   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:43.915384   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:45.916082   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:47.916757   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:46.432649   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:48.434738   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:46.394706   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:46.894743   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.395117   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.894780   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:48.395408   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:48.895349   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:49.394860   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:49.894472   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:50.395102   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:50.895157   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.512284   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.011848   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.413787   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:52.913793   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.933240   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:52.935428   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:51.394691   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:51.895193   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:52.395131   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:52.894787   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:53.394652   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:53.895139   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:54.395160   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:54.895153   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:55.394410   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:55.584599   50624 kubeadm.go:1088] duration metric: took 12.347498848s to wait for elevateKubeSystemPrivileges.
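The burst of repeated "get sa default" calls above is minikube polling, roughly every half second judging by the timestamps, until the default ServiceAccount exists before it finishes elevating kube-system privileges. A rough shell equivalent of that wait (loop form and interval are assumptions):

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done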
	I1207 21:20:55.584628   50624 kubeadm.go:406] StartCluster complete in 5m7.857234007s
	I1207 21:20:55.584645   50624 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:20:55.584733   50624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:20:55.587311   50624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:20:55.587607   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:20:55.587630   50624 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:20:55.587708   50624 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-598346"
	I1207 21:20:55.587716   50624 addons.go:69] Setting default-storageclass=true in profile "embed-certs-598346"
	I1207 21:20:55.587728   50624 addons.go:69] Setting metrics-server=true in profile "embed-certs-598346"
	I1207 21:20:55.587739   50624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-598346"
	I1207 21:20:55.587760   50624 addons.go:231] Setting addon metrics-server=true in "embed-certs-598346"
	W1207 21:20:55.587769   50624 addons.go:240] addon metrics-server should already be in state true
	I1207 21:20:55.587826   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.587736   50624 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-598346"
	W1207 21:20:55.587852   50624 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:20:55.587901   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.587824   50624 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:20:55.588192   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588202   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588223   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.588224   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.588284   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588308   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.605717   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I1207 21:20:55.605750   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I1207 21:20:55.605726   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1207 21:20:55.606254   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606305   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606338   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606778   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606803   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.606823   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606844   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.606826   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606904   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.607178   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607218   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607274   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607420   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.607776   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.607816   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.607818   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.607849   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.610610   50624 addons.go:231] Setting addon default-storageclass=true in "embed-certs-598346"
	W1207 21:20:55.610628   50624 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:20:55.610647   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.610902   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.610927   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.624530   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I1207 21:20:55.624997   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.625474   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.625492   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.625833   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.626016   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.626236   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I1207 21:20:55.626715   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.627093   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I1207 21:20:55.627538   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.627700   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.627709   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.628044   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.628061   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.628109   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.628112   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.629910   50624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:20:55.628721   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.628756   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.631270   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.631338   50624 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:20:55.631357   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:20:55.631371   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.631724   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.634618   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.636632   50624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:20:55.635162   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.635740   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.638311   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:20:55.638331   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:20:55.638354   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.638318   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.638427   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.638930   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.639110   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.639264   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.642987   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.643401   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.643432   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.643605   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.643794   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.643947   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.644065   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.649214   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37993
	I1207 21:20:55.649604   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.650085   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.650106   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.650583   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.650740   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.657356   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.657691   50624 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:20:55.657708   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:20:55.657727   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.659345   50624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-598346" context rescaled to 1 replicas
	I1207 21:20:55.659381   50624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:20:55.660949   50624 out.go:177] * Verifying Kubernetes components...
	I1207 21:20:55.662172   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:20:55.661748   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.662288   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.662323   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.662617   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.662821   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.662992   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.663175   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.825166   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:20:55.850131   50624 node_ready.go:35] waiting up to 6m0s for node "embed-certs-598346" to be "Ready" ...
	I1207 21:20:55.850203   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
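The sed pipeline above edits the coredns ConfigMap in place. Per the expression quoted in the log, the block it inserts into the Corefile ahead of the forward directive is the following (it also adds a log directive before errors):

        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }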
	I1207 21:20:55.850365   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:20:55.850378   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:20:55.879031   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:20:55.896010   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:20:55.896034   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:20:55.910575   50624 node_ready.go:49] node "embed-certs-598346" has status "Ready":"True"
	I1207 21:20:55.910603   50624 node_ready.go:38] duration metric: took 60.438039ms waiting for node "embed-certs-598346" to be "Ready" ...
	I1207 21:20:55.910615   50624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:55.976847   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:20:55.976874   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:20:55.981345   50624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:56.068591   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
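The metrics-server deployment applied here uses the fake.domain/registry.k8s.io/echoserver:1.4 image noted a few lines earlier, so its pod is unlikely ever to pull successfully and is expected to stay not Ready for the duration of the test. One way to confirm that from the host, sketched with the in-VM kubectl and kubeconfig paths from the log and the pod name that appears later in this log:

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -o wide
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system describe pod metrics-server-57f55c9bc5-pstg2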
	I1207 21:20:52.509374   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:55.012033   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:54.915300   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.414020   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.761169   50624 pod_ready.go:97] error getting pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7cvcf" not found
	I1207 21:20:57.761195   50624 pod_ready.go:81] duration metric: took 1.779826027s waiting for pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:57.761205   50624 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7cvcf" not found
	I1207 21:20:57.761212   50624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.813172   50624 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.962919124s)
	I1207 21:20:58.813238   50624 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1207 21:20:58.813195   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.934130104s)
	I1207 21:20:58.813281   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813299   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813520   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.988311627s)
	I1207 21:20:58.813560   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813572   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813757   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.813776   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.813787   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813796   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813831   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.814066   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.814066   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814093   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.814097   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814110   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.814132   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.814152   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.814511   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814531   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.839304   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.839329   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.839611   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.839653   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.839663   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.859922   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.791233211s)
	I1207 21:20:58.859979   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.859998   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.860412   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.860469   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.860483   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.860495   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.860430   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.860749   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.860768   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.860778   50624 addons.go:467] Verifying addon metrics-server=true in "embed-certs-598346"
	I1207 21:20:58.863874   50624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:20:55.431955   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.434174   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:58.865423   50624 addons.go:502] enable addons completed in 3.277791662s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:20:58.894841   50624 pod_ready.go:92] pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.894877   50624 pod_ready.go:81] duration metric: took 1.133651819s waiting for pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.894891   50624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.906981   50624 pod_ready.go:92] pod "etcd-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.907009   50624 pod_ready.go:81] duration metric: took 12.109561ms waiting for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.907020   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.918176   50624 pod_ready.go:92] pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.918198   50624 pod_ready.go:81] duration metric: took 11.169952ms waiting for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.918211   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.928763   50624 pod_ready.go:92] pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.928791   50624 pod_ready.go:81] duration metric: took 10.570922ms waiting for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.928804   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h4pmv" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.163618   50624 pod_ready.go:92] pod "kube-proxy-h4pmv" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:00.163652   50624 pod_ready.go:81] duration metric: took 1.234839709s waiting for pod "kube-proxy-h4pmv" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.163664   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.455887   50624 pod_ready.go:92] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:00.455909   50624 pod_ready.go:81] duration metric: took 292.236645ms waiting for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.455917   50624 pod_ready.go:38] duration metric: took 4.545291617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:00.455932   50624 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:00.455974   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:00.474126   50624 api_server.go:72] duration metric: took 4.814712718s to wait for apiserver process to appear ...
	I1207 21:21:00.474151   50624 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:00.474170   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:21:00.480909   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
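The healthz probe above can be reproduced by hand against the same endpoint. A small sketch, assuming the cluster CA sits in the certificateDir reported earlier and that anonymous access to /healthz is still allowed (the Kubernetes default):

    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.72.180:8443/healthz
    # or, skipping certificate verification:
    curl -k https://192.168.72.180:8443/healthz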
	I1207 21:21:00.482468   50624 api_server.go:141] control plane version: v1.28.4
	I1207 21:21:00.482491   50624 api_server.go:131] duration metric: took 8.332499ms to wait for apiserver health ...
	I1207 21:21:00.482500   50624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:00.658932   50624 system_pods.go:59] 8 kube-system pods found
	I1207 21:21:00.658965   50624 system_pods.go:61] "coredns-5dd5756b68-nllk7" [89c53a27-fa3e-40e9-b180-1bb6ae5c7b62] Running
	I1207 21:21:00.658973   50624 system_pods.go:61] "etcd-embed-certs-598346" [a837c9ba-7a9d-4c61-9474-160ff283b42e] Running
	I1207 21:21:00.658980   50624 system_pods.go:61] "kube-apiserver-embed-certs-598346" [d65bb254-2c09-49c3-98a8-651f580e5f3d] Running
	I1207 21:21:00.658986   50624 system_pods.go:61] "kube-controller-manager-embed-certs-598346" [307a7c5c-0579-4c3c-a84f-e99d61dd8722] Running
	I1207 21:21:00.658992   50624 system_pods.go:61] "kube-proxy-h4pmv" [2d3cc315-efaf-47b9-86e3-851cc930461b] Running
	I1207 21:21:00.658999   50624 system_pods.go:61] "kube-scheduler-embed-certs-598346" [43983338-9029-4240-9b20-b23f64f6880c] Running
	I1207 21:21:00.659010   50624 system_pods.go:61] "metrics-server-57f55c9bc5-pstg2" [463b12c8-de62-4ff8-a5c4-55eeb721eea8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:00.659018   50624 system_pods.go:61] "storage-provisioner" [838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14] Running
	I1207 21:21:00.659036   50624 system_pods.go:74] duration metric: took 176.530206ms to wait for pod list to return data ...
	I1207 21:21:00.659049   50624 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:00.853965   50624 default_sa.go:45] found service account: "default"
	I1207 21:21:00.853997   50624 default_sa.go:55] duration metric: took 194.939162ms for default service account to be created ...
	I1207 21:21:00.854008   50624 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:01.058565   50624 system_pods.go:86] 8 kube-system pods found
	I1207 21:21:01.058594   50624 system_pods.go:89] "coredns-5dd5756b68-nllk7" [89c53a27-fa3e-40e9-b180-1bb6ae5c7b62] Running
	I1207 21:21:01.058600   50624 system_pods.go:89] "etcd-embed-certs-598346" [a837c9ba-7a9d-4c61-9474-160ff283b42e] Running
	I1207 21:21:01.058604   50624 system_pods.go:89] "kube-apiserver-embed-certs-598346" [d65bb254-2c09-49c3-98a8-651f580e5f3d] Running
	I1207 21:21:01.058609   50624 system_pods.go:89] "kube-controller-manager-embed-certs-598346" [307a7c5c-0579-4c3c-a84f-e99d61dd8722] Running
	I1207 21:21:01.058613   50624 system_pods.go:89] "kube-proxy-h4pmv" [2d3cc315-efaf-47b9-86e3-851cc930461b] Running
	I1207 21:21:01.058617   50624 system_pods.go:89] "kube-scheduler-embed-certs-598346" [43983338-9029-4240-9b20-b23f64f6880c] Running
	I1207 21:21:01.058634   50624 system_pods.go:89] "metrics-server-57f55c9bc5-pstg2" [463b12c8-de62-4ff8-a5c4-55eeb721eea8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:01.058640   50624 system_pods.go:89] "storage-provisioner" [838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14] Running
	I1207 21:21:01.058651   50624 system_pods.go:126] duration metric: took 204.636417ms to wait for k8s-apps to be running ...
	I1207 21:21:01.058664   50624 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:01.058707   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:01.081694   50624 system_svc.go:56] duration metric: took 23.018184ms WaitForService to wait for kubelet.
	I1207 21:21:01.081719   50624 kubeadm.go:581] duration metric: took 5.422310896s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:01.081736   50624 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:01.254804   50624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:01.254838   50624 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:01.254851   50624 node_conditions.go:105] duration metric: took 173.110501ms to run NodePressure ...
	I1207 21:21:01.254866   50624 start.go:228] waiting for startup goroutines ...
	I1207 21:21:01.254875   50624 start.go:233] waiting for cluster config update ...
	I1207 21:21:01.254888   50624 start.go:242] writing updated cluster config ...
	I1207 21:21:01.255260   50624 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:01.312696   50624 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:21:01.314740   50624 out.go:177] * Done! kubectl is now configured to use "embed-certs-598346" cluster and "default" namespace by default
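Once the profile reports Done!, the merged kubeconfig at /home/jenkins/minikube-integration/17719-9628/kubeconfig can be used directly. Assuming the context name mirrors the profile name, a quick smoke test would be:

    kubectl --kubeconfig /home/jenkins/minikube-integration/17719-9628/kubeconfig \
            --context embed-certs-598346 get pods -A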
	I1207 21:20:57.510167   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:59.202324   51037 pod_ready.go:81] duration metric: took 4m0.000618876s waiting for pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:59.202361   51037 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:20:59.202386   51037 pod_ready.go:38] duration metric: took 4m13.59894194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:59.202417   51037 kubeadm.go:640] restartCluster took 4m34.848470509s
	W1207 21:20:59.202490   51037 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:20:59.202525   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:20:59.416072   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:01.416132   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:59.932924   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:01.933678   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:04.432068   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:03.914100   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:06.414149   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:06.432277   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:08.432456   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:08.914660   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:10.927167   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.414941   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.233635   51037 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.031083103s)
	I1207 21:21:13.233717   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:13.246941   51037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:21:13.256697   51037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:21:13.265143   51037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:21:13.265188   51037 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:21:13.323766   51037 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1207 21:21:13.323875   51037 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:21:13.477749   51037 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:21:13.477938   51037 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:21:13.478083   51037 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:21:13.750607   51037 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:21:13.752541   51037 out.go:204]   - Generating certificates and keys ...
	I1207 21:21:13.752655   51037 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:21:13.752735   51037 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:21:13.752887   51037 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:21:13.753031   51037 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:21:13.753250   51037 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:21:13.753432   51037 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:21:13.753647   51037 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:21:13.753850   51037 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:21:13.754167   51037 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:21:13.755114   51037 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:21:13.755889   51037 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:21:13.756020   51037 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:21:13.859938   51037 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:21:14.193613   51037 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 21:21:14.239766   51037 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:21:14.448306   51037 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:21:14.537558   51037 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:21:14.538242   51037 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:21:14.542910   51037 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:21:10.432632   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:12.932769   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.123869   51113 pod_ready.go:81] duration metric: took 4m0.000917841s waiting for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	E1207 21:21:13.123898   51113 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:21:13.123907   51113 pod_ready.go:38] duration metric: took 4m7.926070649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:13.123923   51113 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:13.123951   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:13.124010   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:13.197887   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:13.197918   51113 cri.go:89] found id: ""
	I1207 21:21:13.197947   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:13.198016   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.203887   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:13.203953   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:13.250727   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:13.250754   51113 cri.go:89] found id: ""
	I1207 21:21:13.250766   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:13.250823   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.255837   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:13.255881   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:13.297690   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:13.297719   51113 cri.go:89] found id: ""
	I1207 21:21:13.297729   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:13.297786   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.303238   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:13.303301   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:13.349838   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:13.349879   51113 cri.go:89] found id: ""
	I1207 21:21:13.349890   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:13.349960   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.354368   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:13.354423   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:13.394201   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:13.394230   51113 cri.go:89] found id: ""
	I1207 21:21:13.394240   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:13.394298   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.398418   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:13.398489   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:13.443027   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:13.443055   51113 cri.go:89] found id: ""
	I1207 21:21:13.443065   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:13.443129   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.447530   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:13.447601   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:13.491670   51113 cri.go:89] found id: ""
	I1207 21:21:13.491712   51113 logs.go:284] 0 containers: []
	W1207 21:21:13.491720   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:13.491735   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:13.491795   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:13.541386   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:13.541414   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:13.541421   51113 cri.go:89] found id: ""
	I1207 21:21:13.541430   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:13.541491   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.546270   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.551524   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:13.551549   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:13.630073   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:13.630119   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:13.680287   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:13.680318   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:13.733406   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:13.733442   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:13.751810   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:13.751845   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:13.905859   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:13.905889   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:13.950595   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:13.950626   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:13.993833   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:13.993862   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:14.488205   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:14.488242   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:14.531169   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:14.531201   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:14.588229   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:14.588268   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:14.642280   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:14.642310   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:14.693027   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:14.693062   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:14.544787   51037 out.go:204]   - Booting up control plane ...
	I1207 21:21:14.544925   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:21:14.545032   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:21:14.545988   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:21:14.565092   51037 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:21:14.566289   51037 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:21:14.566356   51037 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:21:14.723698   51037 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:21:15.913198   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:17.914942   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:17.234321   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:17.253156   51113 api_server.go:72] duration metric: took 4m17.441427611s to wait for apiserver process to appear ...
	I1207 21:21:17.253187   51113 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:17.253223   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:17.253330   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:17.301526   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:17.301557   51113 cri.go:89] found id: ""
	I1207 21:21:17.301573   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:17.301631   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.306049   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:17.306124   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:17.359167   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:17.359195   51113 cri.go:89] found id: ""
	I1207 21:21:17.359205   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:17.359264   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.363853   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:17.363919   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:17.403245   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:17.403271   51113 cri.go:89] found id: ""
	I1207 21:21:17.403281   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:17.403345   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.407694   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:17.407771   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:17.462260   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:17.462287   51113 cri.go:89] found id: ""
	I1207 21:21:17.462298   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:17.462355   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.467157   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:17.467214   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:17.502206   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:17.502236   51113 cri.go:89] found id: ""
	I1207 21:21:17.502246   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:17.502301   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.507601   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:17.507672   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:17.550248   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:17.550275   51113 cri.go:89] found id: ""
	I1207 21:21:17.550284   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:17.550345   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.554817   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:17.554879   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:17.595234   51113 cri.go:89] found id: ""
	I1207 21:21:17.595262   51113 logs.go:284] 0 containers: []
	W1207 21:21:17.595272   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:17.595280   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:17.595331   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:17.657464   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:17.657491   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:17.657501   51113 cri.go:89] found id: ""
	I1207 21:21:17.657511   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:17.657566   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.662364   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.667878   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:17.667901   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:17.716160   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:17.716187   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:17.770503   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:17.770548   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:17.836877   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:17.836933   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:17.881499   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:17.881536   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:17.930792   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:17.930837   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:17.945486   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:17.945519   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:18.087782   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:18.087825   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:18.149272   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:18.149312   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:18.196792   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:18.196829   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:18.243539   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:18.243575   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:18.305424   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:18.305465   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:18.772176   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:18.772213   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:19.916426   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:22.414318   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:22.728616   51037 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002882 seconds
	I1207 21:21:22.745711   51037 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:21:22.772747   51037 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:21:23.310807   51037 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:21:23.311004   51037 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-950431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:21:23.826933   51037 kubeadm.go:322] [bootstrap-token] Using token: ft70hz.nx8ps5rcldht4kzk
	I1207 21:21:23.828530   51037 out.go:204]   - Configuring RBAC rules ...
	I1207 21:21:23.828676   51037 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:21:23.836739   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:21:23.845207   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:21:23.852566   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:21:23.856912   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:21:23.863418   51037 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:21:23.881183   51037 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:21:24.185664   51037 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:21:24.246564   51037 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:21:24.246626   51037 kubeadm.go:322] 
	I1207 21:21:24.246741   51037 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:21:24.246761   51037 kubeadm.go:322] 
	I1207 21:21:24.246858   51037 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:21:24.246868   51037 kubeadm.go:322] 
	I1207 21:21:24.246898   51037 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:21:24.246967   51037 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:21:24.247047   51037 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:21:24.247063   51037 kubeadm.go:322] 
	I1207 21:21:24.247122   51037 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:21:24.247132   51037 kubeadm.go:322] 
	I1207 21:21:24.247183   51037 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:21:24.247193   51037 kubeadm.go:322] 
	I1207 21:21:24.247259   51037 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:21:24.247361   51037 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:21:24.247450   51037 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:21:24.247461   51037 kubeadm.go:322] 
	I1207 21:21:24.247565   51037 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:21:24.247669   51037 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:21:24.247678   51037 kubeadm.go:322] 
	I1207 21:21:24.247777   51037 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft70hz.nx8ps5rcldht4kzk \
	I1207 21:21:24.247910   51037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:21:24.247941   51037 kubeadm.go:322] 	--control-plane 
	I1207 21:21:24.247951   51037 kubeadm.go:322] 
	I1207 21:21:24.248049   51037 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:21:24.248059   51037 kubeadm.go:322] 
	I1207 21:21:24.248150   51037 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft70hz.nx8ps5rcldht4kzk \
	I1207 21:21:24.248271   51037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:21:24.249001   51037 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:21:24.249031   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:21:24.249041   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:21:24.250938   51037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:21:21.338084   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:21:21.343250   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 200:
	ok
	I1207 21:21:21.344871   51113 api_server.go:141] control plane version: v1.28.4
	I1207 21:21:21.344892   51113 api_server.go:131] duration metric: took 4.091697961s to wait for apiserver health ...
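	[Editor's note - illustrative only, not part of the recorded run.] The healthz probe logged just above can be reproduced by hand against the same endpoint. The URL is copied verbatim from the log; the -k flag skips verification of the cluster's self-signed serving certificate, and an unauthenticated request to /healthz normally succeeds because of Kubernetes' default system:public-info-viewer binding:
	
	    $ curl -k https://192.168.39.254:8444/healthz
	    ok
	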
	I1207 21:21:21.344901   51113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:21.344930   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:21.344990   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:21.385908   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:21.385944   51113 cri.go:89] found id: ""
	I1207 21:21:21.385954   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:21.386011   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.390584   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:21.390655   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:21.435206   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:21.435226   51113 cri.go:89] found id: ""
	I1207 21:21:21.435236   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:21.435294   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.441020   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:21.441091   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:21.480294   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:21.480319   51113 cri.go:89] found id: ""
	I1207 21:21:21.480329   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:21.480384   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.484454   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:21.484511   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:21.531792   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:21.531817   51113 cri.go:89] found id: ""
	I1207 21:21:21.531826   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:21.531884   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.536194   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:21.536265   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:21.579784   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:21.579803   51113 cri.go:89] found id: ""
	I1207 21:21:21.579810   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:21.579852   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.583895   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:21.583961   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:21.623350   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:21.623383   51113 cri.go:89] found id: ""
	I1207 21:21:21.623393   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:21.623450   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.628173   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:21.628226   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:21.670522   51113 cri.go:89] found id: ""
	I1207 21:21:21.670549   51113 logs.go:284] 0 containers: []
	W1207 21:21:21.670559   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:21.670565   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:21.670622   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:21.717892   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:21.717918   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:21.717939   51113 cri.go:89] found id: ""
	I1207 21:21:21.717958   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:21.718024   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.724161   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.728796   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:21.728817   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:21.743574   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:21.743599   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:22.158202   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:22.158247   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:22.224569   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:22.224610   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:22.376503   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:22.376539   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:22.421207   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:22.421236   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:22.468100   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:22.468130   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:22.514216   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:22.514246   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:22.563190   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:22.563217   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:22.622636   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:22.622673   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:22.673280   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:22.673309   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:22.724767   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:22.724799   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:22.787505   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:22.787539   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:25.337268   51113 system_pods.go:59] 8 kube-system pods found
	I1207 21:21:25.337297   51113 system_pods.go:61] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running
	I1207 21:21:25.337304   51113 system_pods.go:61] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running
	I1207 21:21:25.337312   51113 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running
	I1207 21:21:25.337319   51113 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running
	I1207 21:21:25.337325   51113 system_pods.go:61] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running
	I1207 21:21:25.337331   51113 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running
	I1207 21:21:25.337338   51113 system_pods.go:61] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:25.337347   51113 system_pods.go:61] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running
	I1207 21:21:25.337354   51113 system_pods.go:74] duration metric: took 3.99244703s to wait for pod list to return data ...
	I1207 21:21:25.337363   51113 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:25.340607   51113 default_sa.go:45] found service account: "default"
	I1207 21:21:25.340630   51113 default_sa.go:55] duration metric: took 3.261042ms for default service account to be created ...
	I1207 21:21:25.340637   51113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:25.351616   51113 system_pods.go:86] 8 kube-system pods found
	I1207 21:21:25.351640   51113 system_pods.go:89] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running
	I1207 21:21:25.351646   51113 system_pods.go:89] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running
	I1207 21:21:25.351651   51113 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running
	I1207 21:21:25.351656   51113 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running
	I1207 21:21:25.351659   51113 system_pods.go:89] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running
	I1207 21:21:25.351663   51113 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running
	I1207 21:21:25.351670   51113 system_pods.go:89] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:25.351675   51113 system_pods.go:89] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running
	I1207 21:21:25.351681   51113 system_pods.go:126] duration metric: took 11.04015ms to wait for k8s-apps to be running ...
	I1207 21:21:25.351686   51113 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:25.351725   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:25.368853   51113 system_svc.go:56] duration metric: took 17.156347ms WaitForService to wait for kubelet.
	I1207 21:21:25.368883   51113 kubeadm.go:581] duration metric: took 4m25.557159696s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:25.368908   51113 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:25.372224   51113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:25.372247   51113 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:25.372257   51113 node_conditions.go:105] duration metric: took 3.343495ms to run NodePressure ...
	I1207 21:21:25.372268   51113 start.go:228] waiting for startup goroutines ...
	I1207 21:21:25.372273   51113 start.go:233] waiting for cluster config update ...
	I1207 21:21:25.372282   51113 start.go:242] writing updated cluster config ...
	I1207 21:21:25.372598   51113 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:25.426941   51113 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:21:25.429177   51113 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-275828" cluster and "default" namespace by default
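	[Editor's note - illustrative only, not part of the recorded run.] The 4m "Ready" wait that pod_ready.go performs above (re-checking metrics-server-57f55c9bc5-qvq95 every couple of seconds, as the timestamps show, until the deadline expires) can be approximated manually with kubectl against the profile's context. The context name is taken from the "Done!" line above; the k8s-app=metrics-server label selector is an assumption based on the addon's usual labels, not something shown in this log:
	
	    $ kubectl --context default-k8s-diff-port-275828 -n kube-system get pods -l k8s-app=metrics-server
	    $ kubectl --context default-k8s-diff-port-275828 -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m
	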
	I1207 21:21:24.252623   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:21:24.278852   51037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:21:24.346081   51037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:21:24.346144   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.346161   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=no-preload-950431 minikube.k8s.io/updated_at=2023_12_07T21_21_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.458044   51037 ops.go:34] apiserver oom_adj: -16
	I1207 21:21:24.715413   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.801098   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:25.396467   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:25.895918   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:26.396185   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.914616   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:26.915500   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:26.896260   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:27.396455   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:27.896542   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:28.396551   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:28.896865   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.395921   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.896782   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:30.396223   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:30.896296   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:31.395834   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.414005   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:31.415580   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:31.896019   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:32.395959   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:32.895826   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:33.396820   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:33.896674   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:34.396109   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:34.896537   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:35.396438   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:35.896709   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:36.396689   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:36.896404   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:37.062200   51037 kubeadm.go:1088] duration metric: took 12.716124423s to wait for elevateKubeSystemPrivileges.
	I1207 21:21:37.062237   51037 kubeadm.go:406] StartCluster complete in 5m12.769835709s
	I1207 21:21:37.062255   51037 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:21:37.062333   51037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:21:37.064828   51037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:21:37.065103   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:21:37.065193   51037 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:21:37.065273   51037 addons.go:69] Setting storage-provisioner=true in profile "no-preload-950431"
	I1207 21:21:37.065291   51037 addons.go:231] Setting addon storage-provisioner=true in "no-preload-950431"
	W1207 21:21:37.065299   51037 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:21:37.065297   51037 addons.go:69] Setting default-storageclass=true in profile "no-preload-950431"
	I1207 21:21:37.065323   51037 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:21:37.065329   51037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-950431"
	I1207 21:21:37.065349   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.065302   51037 addons.go:69] Setting metrics-server=true in profile "no-preload-950431"
	I1207 21:21:37.065374   51037 addons.go:231] Setting addon metrics-server=true in "no-preload-950431"
	W1207 21:21:37.065388   51037 addons.go:240] addon metrics-server should already be in state true
	I1207 21:21:37.065423   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.065737   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065751   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065751   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065780   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.065772   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.065821   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.083129   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I1207 21:21:37.083593   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I1207 21:21:37.083761   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084047   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084356   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I1207 21:21:37.084566   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.084590   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.084625   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.084645   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.084667   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084935   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.084997   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.085044   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.085065   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.085381   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.085505   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.085542   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.085741   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.085909   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.085964   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.089134   51037 addons.go:231] Setting addon default-storageclass=true in "no-preload-950431"
	W1207 21:21:37.089153   51037 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:21:37.089180   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.089673   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.089712   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.101048   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35191
	I1207 21:21:37.101516   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.102279   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.102300   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.102727   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.103618   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.106122   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.107693   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I1207 21:21:37.107843   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I1207 21:21:37.108128   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.108521   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.108696   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.108709   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.109070   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.109204   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.109227   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.114090   51037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:21:37.109833   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.109949   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.115707   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.115743   51037 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:21:37.115765   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:21:37.115789   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.116919   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.119056   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.120429   51037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:21:37.121716   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:21:37.121741   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:21:37.121759   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.119470   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.121830   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.121852   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.120097   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.122062   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.122309   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.122432   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.124738   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.124992   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.125012   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.125346   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.125523   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.125647   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.125817   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.136943   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I1207 21:21:37.137636   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.138210   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.138233   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.138659   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.138896   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.140541   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.140792   51037 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:21:37.140808   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:21:37.140824   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.144251   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.144616   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.144667   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.144856   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.145009   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.145167   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.145260   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.157909   51037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-950431" context rescaled to 1 replicas
	I1207 21:21:37.157965   51037 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:21:37.159529   51037 out.go:177] * Verifying Kubernetes components...
	I1207 21:21:33.914686   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:35.916902   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:38.413489   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:37.160895   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:37.329265   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:21:37.476842   51037 node_ready.go:35] waiting up to 6m0s for node "no-preload-950431" to be "Ready" ...
	I1207 21:21:37.481433   51037 node_ready.go:49] node "no-preload-950431" has status "Ready":"True"
	I1207 21:21:37.481456   51037 node_ready.go:38] duration metric: took 4.57457ms waiting for node "no-preload-950431" to be "Ready" ...
	I1207 21:21:37.481467   51037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:37.499564   51037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-cz2xd" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:37.556110   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:21:37.556142   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:21:37.558917   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:21:37.575696   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:21:37.653458   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:21:37.653478   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:21:37.782294   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:21:37.782322   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:21:37.850657   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:21:38.161232   51037 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1207 21:21:38.734356   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.175402881s)
	I1207 21:21:38.734410   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734420   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734423   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.158690213s)
	I1207 21:21:38.734466   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734482   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734859   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.734873   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.734860   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.734911   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.734927   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734935   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734913   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735006   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.735016   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.735028   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.735166   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735192   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.735321   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.735357   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735369   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.772677   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.772700   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.772969   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.773038   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.773055   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.056990   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206289914s)
	I1207 21:21:39.057048   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:39.057064   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:39.057441   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:39.057480   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:39.057502   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.057520   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:39.057534   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:39.057809   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:39.057826   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.057845   51037 addons.go:467] Verifying addon metrics-server=true in "no-preload-950431"
	I1207 21:21:39.060003   51037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:21:39.061797   51037 addons.go:502] enable addons completed in 1.996609653s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:21:39.690111   51037 pod_ready.go:102] pod "coredns-76f75df574-cz2xd" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:40.698712   51037 pod_ready.go:92] pod "coredns-76f75df574-cz2xd" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.698739   51037 pod_ready.go:81] duration metric: took 3.199144567s waiting for pod "coredns-76f75df574-cz2xd" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.698751   51037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hsjsq" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.714087   51037 pod_ready.go:92] pod "coredns-76f75df574-hsjsq" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.714108   51037 pod_ready.go:81] duration metric: took 15.350128ms waiting for pod "coredns-76f75df574-hsjsq" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.714117   51037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.725058   51037 pod_ready.go:92] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.725078   51037 pod_ready.go:81] duration metric: took 10.955777ms waiting for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.725089   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.742099   51037 pod_ready.go:92] pod "kube-apiserver-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.742127   51037 pod_ready.go:81] duration metric: took 17.029172ms waiting for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.742140   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.748676   51037 pod_ready.go:92] pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.748699   51037 pod_ready.go:81] duration metric: took 6.549805ms waiting for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.748713   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6v8td" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:41.988512   51037 pod_ready.go:92] pod "kube-proxy-6v8td" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:41.988537   51037 pod_ready.go:81] duration metric: took 1.239816309s waiting for pod "kube-proxy-6v8td" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:41.988545   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:42.283301   51037 pod_ready.go:92] pod "kube-scheduler-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:42.283330   51037 pod_ready.go:81] duration metric: took 294.777559ms waiting for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:42.283341   51037 pod_ready.go:38] duration metric: took 4.801864648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:42.283360   51037 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:42.283420   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:42.308983   51037 api_server.go:72] duration metric: took 5.150987572s to wait for apiserver process to appear ...
	I1207 21:21:42.309013   51037 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:42.309036   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:21:42.315006   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 200:
	ok
	I1207 21:21:42.316220   51037 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 21:21:42.316240   51037 api_server.go:131] duration metric: took 7.219959ms to wait for apiserver health ...
	I1207 21:21:42.316247   51037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:42.485186   51037 system_pods.go:59] 9 kube-system pods found
	I1207 21:21:42.485214   51037 system_pods.go:61] "coredns-76f75df574-cz2xd" [5757c023-02cd-4be8-b4cc-6b45154f7b5a] Running
	I1207 21:21:42.485218   51037 system_pods.go:61] "coredns-76f75df574-hsjsq" [91f9ed18-c964-409d-9a58-7c84c62d51db] Running
	I1207 21:21:42.485223   51037 system_pods.go:61] "etcd-no-preload-950431" [c5480a67-a406-4014-bf13-3e4e970d528b] Running
	I1207 21:21:42.485228   51037 system_pods.go:61] "kube-apiserver-no-preload-950431" [73177a27-c561-4f5c-900a-80226abb7bf1] Running
	I1207 21:21:42.485234   51037 system_pods.go:61] "kube-controller-manager-no-preload-950431" [3e231c95-fb0b-4915-9ab0-45f35e7d6a2c] Running
	I1207 21:21:42.485237   51037 system_pods.go:61] "kube-proxy-6v8td" [268d28d1-60a9-4323-b36f-883388fbdcea] Running
	I1207 21:21:42.485242   51037 system_pods.go:61] "kube-scheduler-no-preload-950431" [a6767118-a858-439d-a58f-0e62b0b7442e] Running
	I1207 21:21:42.485251   51037 system_pods.go:61] "metrics-server-57f55c9bc5-ffkls" [e571e115-9e30-4be3-b77c-27db27a95feb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:42.485258   51037 system_pods.go:61] "storage-provisioner" [9400eb14-80e0-4725-906e-b80cd7e998a1] Running
	I1207 21:21:42.485278   51037 system_pods.go:74] duration metric: took 169.025303ms to wait for pod list to return data ...
	I1207 21:21:42.485287   51037 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:42.680542   51037 default_sa.go:45] found service account: "default"
	I1207 21:21:42.680569   51037 default_sa.go:55] duration metric: took 195.272707ms for default service account to be created ...
	I1207 21:21:42.680577   51037 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:42.890877   51037 system_pods.go:86] 9 kube-system pods found
	I1207 21:21:42.890927   51037 system_pods.go:89] "coredns-76f75df574-cz2xd" [5757c023-02cd-4be8-b4cc-6b45154f7b5a] Running
	I1207 21:21:42.890933   51037 system_pods.go:89] "coredns-76f75df574-hsjsq" [91f9ed18-c964-409d-9a58-7c84c62d51db] Running
	I1207 21:21:42.890938   51037 system_pods.go:89] "etcd-no-preload-950431" [c5480a67-a406-4014-bf13-3e4e970d528b] Running
	I1207 21:21:42.890942   51037 system_pods.go:89] "kube-apiserver-no-preload-950431" [73177a27-c561-4f5c-900a-80226abb7bf1] Running
	I1207 21:21:42.890946   51037 system_pods.go:89] "kube-controller-manager-no-preload-950431" [3e231c95-fb0b-4915-9ab0-45f35e7d6a2c] Running
	I1207 21:21:42.890950   51037 system_pods.go:89] "kube-proxy-6v8td" [268d28d1-60a9-4323-b36f-883388fbdcea] Running
	I1207 21:21:42.890954   51037 system_pods.go:89] "kube-scheduler-no-preload-950431" [a6767118-a858-439d-a58f-0e62b0b7442e] Running
	I1207 21:21:42.890960   51037 system_pods.go:89] "metrics-server-57f55c9bc5-ffkls" [e571e115-9e30-4be3-b77c-27db27a95feb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:42.890965   51037 system_pods.go:89] "storage-provisioner" [9400eb14-80e0-4725-906e-b80cd7e998a1] Running
	I1207 21:21:42.890973   51037 system_pods.go:126] duration metric: took 210.38383ms to wait for k8s-apps to be running ...
	I1207 21:21:42.890979   51037 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:42.891021   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:42.907279   51037 system_svc.go:56] duration metric: took 16.290689ms WaitForService to wait for kubelet.
	I1207 21:21:42.907306   51037 kubeadm.go:581] duration metric: took 5.749318034s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:42.907328   51037 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:43.081361   51037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:43.081390   51037 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:43.081401   51037 node_conditions.go:105] duration metric: took 174.067442ms to run NodePressure ...
	I1207 21:21:43.081412   51037 start.go:228] waiting for startup goroutines ...
	I1207 21:21:43.081420   51037 start.go:233] waiting for cluster config update ...
	I1207 21:21:43.081433   51037 start.go:242] writing updated cluster config ...
	I1207 21:21:43.081691   51037 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:43.131409   51037 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1207 21:21:43.133483   51037 out.go:177] * Done! kubectl is now configured to use "no-preload-950431" cluster and "default" namespace by default
	I1207 21:21:40.414676   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:42.913795   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:44.914599   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:47.414431   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:49.913391   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:51.914426   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:53.915196   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:55.923342   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:58.413783   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:00.414241   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:02.414435   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:04.913358   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:06.913909   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:08.915098   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:11.414320   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:13.414489   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:15.913521   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:18.415215   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:19.107244   50270 pod_ready.go:81] duration metric: took 4m0.000150933s waiting for pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace to be "Ready" ...
	E1207 21:22:19.107300   50270 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:22:19.107323   50270 pod_ready.go:38] duration metric: took 4m1.199790563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:19.107355   50270 kubeadm.go:640] restartCluster took 5m20.261390035s
	W1207 21:22:19.107437   50270 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:22:19.107470   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:22:26.124587   50270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (7.017092462s)
	I1207 21:22:26.124664   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:22:26.139323   50270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:22:26.150243   50270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:22:26.164289   50270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:22:26.164356   50270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1207 21:22:26.390137   50270 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:22:39.046001   50270 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1207 21:22:39.046063   50270 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:22:39.046164   50270 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:22:39.046322   50270 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:22:39.046454   50270 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:22:39.046581   50270 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:22:39.046685   50270 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:22:39.046759   50270 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1207 21:22:39.046836   50270 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:22:39.048426   50270 out.go:204]   - Generating certificates and keys ...
	I1207 21:22:39.048532   50270 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:22:39.048617   50270 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:22:39.048713   50270 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:22:39.048808   50270 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:22:39.048899   50270 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:22:39.048977   50270 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:22:39.049066   50270 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:22:39.049151   50270 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:22:39.049254   50270 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:22:39.049341   50270 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:22:39.049396   50270 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:22:39.049496   50270 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:22:39.049578   50270 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:22:39.049671   50270 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:22:39.049758   50270 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:22:39.049829   50270 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:22:39.049884   50270 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:22:39.051499   50270 out.go:204]   - Booting up control plane ...
	I1207 21:22:39.051604   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:22:39.051706   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:22:39.051778   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:22:39.051841   50270 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:22:39.052043   50270 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:22:39.052137   50270 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.502878 seconds
	I1207 21:22:39.052296   50270 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:22:39.052458   50270 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:22:39.052537   50270 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:22:39.052714   50270 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-483745 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1207 21:22:39.052802   50270 kubeadm.go:322] [bootstrap-token] Using token: 88595b.vk24k0k7lcyxvxlg
	I1207 21:22:39.054142   50270 out.go:204]   - Configuring RBAC rules ...
	I1207 21:22:39.054250   50270 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:22:39.054369   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:22:39.054470   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:22:39.054565   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:22:39.054675   50270 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:22:39.054740   50270 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:22:39.054805   50270 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:22:39.054813   50270 kubeadm.go:322] 
	I1207 21:22:39.054905   50270 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:22:39.054917   50270 kubeadm.go:322] 
	I1207 21:22:39.054996   50270 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:22:39.055004   50270 kubeadm.go:322] 
	I1207 21:22:39.055031   50270 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:22:39.055107   50270 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:22:39.055174   50270 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:22:39.055187   50270 kubeadm.go:322] 
	I1207 21:22:39.055254   50270 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:22:39.055366   50270 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:22:39.055467   50270 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:22:39.055476   50270 kubeadm.go:322] 
	I1207 21:22:39.055565   50270 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1207 21:22:39.055655   50270 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:22:39.055663   50270 kubeadm.go:322] 
	I1207 21:22:39.055776   50270 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 88595b.vk24k0k7lcyxvxlg \
	I1207 21:22:39.055929   50270 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:22:39.055969   50270 kubeadm.go:322]     --control-plane 	  
	I1207 21:22:39.055979   50270 kubeadm.go:322] 
	I1207 21:22:39.056099   50270 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:22:39.056111   50270 kubeadm.go:322] 
	I1207 21:22:39.056215   50270 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 88595b.vk24k0k7lcyxvxlg \
	I1207 21:22:39.056371   50270 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:22:39.056402   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:22:39.056414   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:22:39.058073   50270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:22:39.059659   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:22:39.078052   50270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:22:39.118479   50270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:22:39.118540   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=old-k8s-version-483745 minikube.k8s.io/updated_at=2023_12_07T21_22_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.118551   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.149391   50270 ops.go:34] apiserver oom_adj: -16
	I1207 21:22:39.334606   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.476182   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:40.075027   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:40.574693   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:41.074497   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:41.575214   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:42.075168   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:42.575162   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:43.074671   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:43.575406   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:44.074823   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:44.574597   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:45.075138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:45.575119   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:46.075437   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:46.575138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:47.075138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:47.575171   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:48.074939   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:48.574679   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:49.075065   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:49.574571   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:50.074553   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:50.575129   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:51.075320   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:51.574806   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:52.075136   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:52.575144   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:53.075139   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:53.575394   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:54.075185   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:54.274051   50270 kubeadm.go:1088] duration metric: took 15.155559482s to wait for elevateKubeSystemPrivileges.
	I1207 21:22:54.274092   50270 kubeadm.go:406] StartCluster complete in 5m55.488226201s
	I1207 21:22:54.274140   50270 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:22:54.274247   50270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:22:54.276679   50270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:22:54.276902   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:22:54.276991   50270 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:22:54.277064   50270 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277090   50270 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-483745"
	W1207 21:22:54.277103   50270 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:22:54.277101   50270 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277089   50270 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277116   50270 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:22:54.277127   50270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-483745"
	I1207 21:22:54.277152   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.277119   50270 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-483745"
	W1207 21:22:54.277169   50270 addons.go:240] addon metrics-server should already be in state true
	I1207 21:22:54.277208   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.277529   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277564   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277573   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.277581   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277591   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.277612   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.293696   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I1207 21:22:54.293908   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I1207 21:22:54.294118   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.294622   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.294642   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.294656   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.295100   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.295119   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.295182   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.295512   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.295671   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.295709   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I1207 21:22:54.295752   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.295791   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.296131   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.296662   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.296681   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.297077   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.297597   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.297635   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.299605   50270 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-483745"
	W1207 21:22:54.299630   50270 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:22:54.299658   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.300047   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.300087   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.314531   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1207 21:22:54.315168   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.315718   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.315804   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I1207 21:22:54.315809   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.316447   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.316491   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.316657   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.316979   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.317005   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.317340   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.317887   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.317945   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.319086   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.321272   50270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:22:54.320074   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I1207 21:22:54.322834   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:22:54.322849   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:22:54.322863   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.323218   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.323677   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.323689   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.323997   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.324166   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.326460   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.328172   50270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:22:54.327148   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.328366   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.329567   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.329588   50270 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:22:54.329593   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.329600   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:22:54.329613   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.329725   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.329909   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.330088   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.333435   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.334161   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.334192   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.334480   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.334786   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.334959   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.335091   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.336340   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40483
	I1207 21:22:54.336672   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.337021   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.337034   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.337316   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.337486   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.338808   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.339043   50270 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:22:54.339053   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:22:54.339064   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.341591   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.341937   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.341960   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.342127   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.342285   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.342453   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.342592   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.385908   50270 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-483745" context rescaled to 1 replicas
	I1207 21:22:54.385959   50270 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:22:54.387637   50270 out.go:177] * Verifying Kubernetes components...
	I1207 21:22:54.388616   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:22:54.604286   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:22:54.671574   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:22:54.671601   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:22:54.752688   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:22:54.752714   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:22:54.792943   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:22:54.847458   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:22:54.847489   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:22:54.916698   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:22:54.931860   50270 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-483745" to be "Ready" ...
	I1207 21:22:54.931924   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:22:55.152010   50270 node_ready.go:49] node "old-k8s-version-483745" has status "Ready":"True"
	I1207 21:22:55.152041   50270 node_ready.go:38] duration metric: took 220.147741ms waiting for node "old-k8s-version-483745" to be "Ready" ...
	I1207 21:22:55.152055   50270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:55.356283   50270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:55.654243   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.049922238s)
	I1207 21:22:55.654296   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.654313   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.654661   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.654687   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.654694   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Closing plugin on server side
	I1207 21:22:55.654703   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.654715   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.655010   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.655052   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.693855   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.693876   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.694176   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.694197   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.927642   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.13465835s)
	I1207 21:22:55.927714   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.927731   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.928056   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.928076   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.928087   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.928096   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.928395   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Closing plugin on server side
	I1207 21:22:55.928413   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.928428   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.033797   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117050773s)
	I1207 21:22:56.033845   50270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.101898699s)
	I1207 21:22:56.033881   50270 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1207 21:22:56.033850   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:56.033918   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:56.034207   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:56.034220   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.034229   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:56.034236   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:56.034460   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:56.034480   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.034516   50270 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-483745"
	I1207 21:22:56.036701   50270 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1207 21:22:56.038078   50270 addons.go:502] enable addons completed in 1.76109636s: enabled=[default-storageclass storage-provisioner metrics-server]
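
Note on the CoreDNS rewrite completed at 21:22:56 above: minikube pipes the coredns ConfigMap through sed over SSH to insert a hosts block for host.minikube.internal (it also injects a `log` directive, omitted here). A minimal sketch of the same edit run from a workstation with plain kubectl, assuming the standard Corefile layout and a context pointing at this cluster:

    # Sketch only: equivalent of the in-VM sed pipeline shown in the log.
    # The injected hosts block resolves host.minikube.internal to the host-side IP (192.168.61.1 in this run).
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl -n kube-system replace -f -
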
	I1207 21:22:57.718454   50270 pod_ready.go:102] pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:58.708880   50270 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-jvh5w" not found
	I1207 21:22:58.708910   50270 pod_ready.go:81] duration metric: took 3.352602717s waiting for pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace to be "Ready" ...
	E1207 21:22:58.708920   50270 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-jvh5w" not found
	I1207 21:22:58.708930   50270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.715179   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace has status "Ready":"True"
	I1207 21:22:58.715205   50270 pod_ready.go:81] duration metric: took 6.268335ms waiting for pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.715219   50270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-42fzb" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.720511   50270 pod_ready.go:92] pod "kube-proxy-42fzb" in "kube-system" namespace has status "Ready":"True"
	I1207 21:22:58.720526   50270 pod_ready.go:81] duration metric: took 5.302238ms waiting for pod "kube-proxy-42fzb" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.720544   50270 pod_ready.go:38] duration metric: took 3.568467628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:58.720558   50270 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:22:58.720609   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:22:58.737687   50270 api_server.go:72] duration metric: took 4.351680673s to wait for apiserver process to appear ...
	I1207 21:22:58.737712   50270 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:22:58.737730   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:22:58.744722   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:22:58.745867   50270 api_server.go:141] control plane version: v1.16.0
	I1207 21:22:58.745887   50270 api_server.go:131] duration metric: took 8.167725ms to wait for apiserver health ...
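
The healthz probe logged at 21:22:58 can be reproduced by hand. A minimal sketch, assuming you accept skipping TLS verification (minikube's own client authenticates with the cluster certificates instead):

    # Probe the apiserver health endpoint that the wait loop above polls.
    # -k skips certificate verification; a healthy apiserver answers with "ok".
    curl -sk https://192.168.61.171:8443/healthz
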
	I1207 21:22:58.745897   50270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:22:58.750259   50270 system_pods.go:59] 4 kube-system pods found
	I1207 21:22:58.750278   50270 system_pods.go:61] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.750283   50270 system_pods.go:61] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.750292   50270 system_pods.go:61] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.750306   50270 system_pods.go:61] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.750319   50270 system_pods.go:74] duration metric: took 4.415504ms to wait for pod list to return data ...
	I1207 21:22:58.750328   50270 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:22:58.753151   50270 default_sa.go:45] found service account: "default"
	I1207 21:22:58.753173   50270 default_sa.go:55] duration metric: took 2.836309ms for default service account to be created ...
	I1207 21:22:58.753181   50270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:22:58.757164   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:58.757188   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.757195   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.757212   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.757223   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.757246   50270 retry.go:31] will retry after 195.542562ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:58.957411   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:58.957443   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.957451   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.957461   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.957471   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.957494   50270 retry.go:31] will retry after 294.291725ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:59.264559   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:59.264599   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:59.264608   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:59.264620   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:59.264632   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:59.264651   50270 retry.go:31] will retry after 392.704433ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:59.663939   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:59.663967   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:59.663973   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:59.663979   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:59.663985   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:59.664003   50270 retry.go:31] will retry after 598.787872ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:00.268415   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:00.268441   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:00.268447   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:00.268453   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:00.268458   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:00.268472   50270 retry.go:31] will retry after 554.6659ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:00.829267   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:00.829293   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:00.829299   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:00.829305   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:00.829309   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:00.829325   50270 retry.go:31] will retry after 832.708436ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:01.667497   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:01.667526   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:01.667532   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:01.667539   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:01.667543   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:01.667560   50270 retry.go:31] will retry after 824.504206ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:02.497009   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:02.497033   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:02.497038   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:02.497045   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:02.497049   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:02.497064   50270 retry.go:31] will retry after 1.335460815s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:03.837788   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:03.837816   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:03.837821   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:03.837828   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:03.837833   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:03.837848   50270 retry.go:31] will retry after 1.185883705s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:05.028679   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:05.028712   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:05.028721   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:05.028731   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:05.028738   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:05.028758   50270 retry.go:31] will retry after 2.162817833s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:07.196435   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:07.196468   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:07.196476   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:07.196485   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:07.196493   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:07.196512   50270 retry.go:31] will retry after 2.853202831s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:10.054277   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:10.054303   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:10.054308   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:10.054315   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:10.054320   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:10.054335   50270 retry.go:31] will retry after 3.392213767s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:13.452019   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:13.452046   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:13.452052   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:13.452059   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:13.452064   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:13.452081   50270 retry.go:31] will retry after 3.42315118s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:16.882830   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:16.882856   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:16.882861   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:16.882868   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:16.882873   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:16.882887   50270 retry.go:31] will retry after 3.42232982s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:20.310740   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:20.310766   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:20.310771   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:20.310780   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:20.310785   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:20.310801   50270 retry.go:31] will retry after 6.110306117s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:26.426492   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:26.426520   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:26.426525   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:26.426532   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:26.426537   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:26.426554   50270 retry.go:31] will retry after 5.458076236s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:31.890544   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:31.890575   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:31.890580   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:31.890589   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:31.890593   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:31.890611   50270 retry.go:31] will retry after 10.030622922s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:41.928589   50270 system_pods.go:86] 6 kube-system pods found
	I1207 21:23:41.928622   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:41.928630   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:23:41.928637   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:23:41.928642   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:41.928651   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:41.928659   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:41.928677   50270 retry.go:31] will retry after 11.183539963s: missing components: kube-controller-manager, kube-scheduler
	I1207 21:23:53.119257   50270 system_pods.go:86] 8 kube-system pods found
	I1207 21:23:53.119284   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:53.119292   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:23:53.119298   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:23:53.119304   50270 system_pods.go:89] "kube-controller-manager-old-k8s-version-483745" [069a811c-4601-4e3c-bf64-77e4cf8d8e0e] Pending
	I1207 21:23:53.119309   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:53.119315   50270 system_pods.go:89] "kube-scheduler-old-k8s-version-483745" [1fa6f211-aa49-4ab9-ba1d-d613e7673ba8] Running
	I1207 21:23:53.119325   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:53.119332   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:53.119353   50270 retry.go:31] will retry after 13.123307809s: missing components: kube-controller-manager
	I1207 21:24:06.249016   50270 system_pods.go:86] 8 kube-system pods found
	I1207 21:24:06.249042   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:24:06.249048   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:24:06.249054   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:24:06.249059   50270 system_pods.go:89] "kube-controller-manager-old-k8s-version-483745" [069a811c-4601-4e3c-bf64-77e4cf8d8e0e] Running
	I1207 21:24:06.249064   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:24:06.249068   50270 system_pods.go:89] "kube-scheduler-old-k8s-version-483745" [1fa6f211-aa49-4ab9-ba1d-d613e7673ba8] Running
	I1207 21:24:06.249074   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:24:06.249079   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:24:06.249087   50270 system_pods.go:126] duration metric: took 1m7.495900916s to wait for k8s-apps to be running ...
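
The "will retry after ..." lines above are minikube's polling loop waiting for the static control-plane pods to appear. A rough shell equivalent using the same component labels listed at 21:22:55, with a hypothetical timeout value:

    # Rough equivalent of the wait: block until each control-plane component reports Ready.
    # The component= labels are the ones kubeadm puts on its static mirror pods.
    for comp in etcd kube-apiserver kube-controller-manager kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l component=$comp --timeout=120s
    done
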
	I1207 21:24:06.249092   50270 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:24:06.249137   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:24:06.265801   50270 system_svc.go:56] duration metric: took 16.700976ms WaitForService to wait for kubelet.
	I1207 21:24:06.265820   50270 kubeadm.go:581] duration metric: took 1m11.879821949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:24:06.265837   50270 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:24:06.269326   50270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:24:06.269346   50270 node_conditions.go:123] node cpu capacity is 2
	I1207 21:24:06.269356   50270 node_conditions.go:105] duration metric: took 3.51576ms to run NodePressure ...
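
The capacity figures in the NodePressure check above come straight from the node status. A one-liner sketch that reads the same fields, assuming kubectl is pointed at this cluster:

    # Read the node capacity fields reported above (cpu: 2, ephemeral-storage: 17784752Ki).
    kubectl get node old-k8s-version-483745 \
      -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'
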
	I1207 21:24:06.269366   50270 start.go:228] waiting for startup goroutines ...
	I1207 21:24:06.269371   50270 start.go:233] waiting for cluster config update ...
	I1207 21:24:06.269384   50270 start.go:242] writing updated cluster config ...
	I1207 21:24:06.269660   50270 ssh_runner.go:195] Run: rm -f paused
	I1207 21:24:06.317992   50270 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1207 21:24:06.320122   50270 out.go:177] 
	W1207 21:24:06.321437   50270 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1207 21:24:06.322708   50270 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1207 21:24:06.324092   50270 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-483745" cluster and "default" namespace by default
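
Following the version-skew warning above, the version-matched client can be invoked through minikube itself; a sketch, assuming the profile name from this run:

    # Use the kubectl bundled for the cluster's Kubernetes version (v1.16.0) instead of the host's 1.28.4.
    minikube -p old-k8s-version-483745 kubectl -- get pods -A
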
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 21:15:53 UTC, ends at Thu 2023-12-07 21:30:44 UTC. --
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.789534038Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984644789516125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=b45e1aed-07cc-41d6-8bce-11827c72afc6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.790509085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e6ba90fb-5677-433c-9396-57f59ef5da35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.790577733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e6ba90fb-5677-433c-9396-57f59ef5da35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.790809044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a94bd233c53753083d49569b9f67d5bcca6dcbd661e3423a60f8f1e25313558,PodSandboxId:954f69cb07067d93d138b8d3b21f6e74683655fc2356636293aab3e5feb2c4ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701984100548464744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9400eb14-80e0-4725-906e-b80cd7e998a1,},Annotations:map[string]string{io.kubernetes.container.hash: 71f51c6e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b82c33266c8cd496db092deef6e9921b53aadba47626e760e1294ea1409e54,PodSandboxId:336e55d5fcc5980970adea2e49bcb938aad4643558b4687c2a42eb63264aaebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701984100412535301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6v8td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268d28d1-60a9-4323-b36f-883388fbdcea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf23620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a649b8603569d15e90bcce4de2616fea81d0af3d462a8f26bd21824e8047a1,PodSandboxId:2b4ad458538851e7d650642af6496119ba7b16dc8224cd0760809b17ee15f65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701984099422506625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-cz2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5757c023-02cd-4be8-b4cc-6b45154f7b5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe25e3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b11510039f6adcf3de1bc80032f50d351bac5b29588bda709d3c301dad0668e,PodSandboxId:e541022f9d01c7c30c00b31c6e70476a08a4cd2c6a733f96ddbd9b75cb67b4d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701984076694580067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96e722d20ddbab6255f365e76f46cc68,},Annotations:map[string]string{io.kubernetes.container.hash: 55c76d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3d15a27f8f9fa5de9244c9871c1731bcf83ab27c491a7ab7c7e88e17702f72,PodSandboxId:68aa3031878817a959ffbcf229875292ee66252e148574554751cce4e912e5ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701984076515910746,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c438697617426137ace4267c786049d,},Annotations:map
[string]string{io.kubernetes.container.hash: 703d180b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13af9806c4e5091a10d6775e7166368534650fbeacb2005e0a0355d27b1970d9,PodSandboxId:7524486cd2b1302f63c513126940587fe29ae1868b1f42066ea842c02cf4944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701984076132575148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57367836cee7f9cd3e80bdbd52661bc3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cde6958dd3c4c4f1bc5b359ca4cff102e9fd270d658608e572688c04b4b231f,PodSandboxId:e08ecb9106195236828079e12569898f281c25eecf449e99336fbeab0af9e97b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701984076283574017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff3caf8698d5a46a55e9ed3203d0a59,},An
notations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e6ba90fb-5677-433c-9396-57f59ef5da35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.833574997Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=85634a5d-7916-463c-88f3-ab2ce6e05cc0 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.833633763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=85634a5d-7916-463c-88f3-ab2ce6e05cc0 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.834821652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4079ae2f-fe0e-4e18-8c30-501da6718000 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.835127179Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984644835116138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=4079ae2f-fe0e-4e18-8c30-501da6718000 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.835867040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7d33be3b-1e19-4eba-a9e4-8a8275a94776 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.835912305Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7d33be3b-1e19-4eba-a9e4-8a8275a94776 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.836055856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a94bd233c53753083d49569b9f67d5bcca6dcbd661e3423a60f8f1e25313558,PodSandboxId:954f69cb07067d93d138b8d3b21f6e74683655fc2356636293aab3e5feb2c4ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701984100548464744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9400eb14-80e0-4725-906e-b80cd7e998a1,},Annotations:map[string]string{io.kubernetes.container.hash: 71f51c6e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b82c33266c8cd496db092deef6e9921b53aadba47626e760e1294ea1409e54,PodSandboxId:336e55d5fcc5980970adea2e49bcb938aad4643558b4687c2a42eb63264aaebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701984100412535301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6v8td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268d28d1-60a9-4323-b36f-883388fbdcea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf23620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a649b8603569d15e90bcce4de2616fea81d0af3d462a8f26bd21824e8047a1,PodSandboxId:2b4ad458538851e7d650642af6496119ba7b16dc8224cd0760809b17ee15f65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701984099422506625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-cz2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5757c023-02cd-4be8-b4cc-6b45154f7b5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe25e3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b11510039f6adcf3de1bc80032f50d351bac5b29588bda709d3c301dad0668e,PodSandboxId:e541022f9d01c7c30c00b31c6e70476a08a4cd2c6a733f96ddbd9b75cb67b4d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701984076694580067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96e722d20ddbab6255f365e76f46cc68,},Annotations:map[string]string{io.kubernetes.container.hash: 55c76d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3d15a27f8f9fa5de9244c9871c1731bcf83ab27c491a7ab7c7e88e17702f72,PodSandboxId:68aa3031878817a959ffbcf229875292ee66252e148574554751cce4e912e5ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701984076515910746,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c438697617426137ace4267c786049d,},Annotations:map
[string]string{io.kubernetes.container.hash: 703d180b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13af9806c4e5091a10d6775e7166368534650fbeacb2005e0a0355d27b1970d9,PodSandboxId:7524486cd2b1302f63c513126940587fe29ae1868b1f42066ea842c02cf4944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701984076132575148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57367836cee7f9cd3e80bdbd52661bc3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cde6958dd3c4c4f1bc5b359ca4cff102e9fd270d658608e572688c04b4b231f,PodSandboxId:e08ecb9106195236828079e12569898f281c25eecf449e99336fbeab0af9e97b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701984076283574017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff3caf8698d5a46a55e9ed3203d0a59,},An
notations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7d33be3b-1e19-4eba-a9e4-8a8275a94776 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.878530347Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dd34bc80-674f-42af-8024-c5f7091792ed name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.878606265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dd34bc80-674f-42af-8024-c5f7091792ed name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.879876547Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=66d9067b-c94a-487b-87b4-c4bf16aa3375 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.880382148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984644880362024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=66d9067b-c94a-487b-87b4-c4bf16aa3375 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.881077304Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c064eda2-fe83-4692-b316-04509b7264a1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.881172084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c064eda2-fe83-4692-b316-04509b7264a1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.881454944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a94bd233c53753083d49569b9f67d5bcca6dcbd661e3423a60f8f1e25313558,PodSandboxId:954f69cb07067d93d138b8d3b21f6e74683655fc2356636293aab3e5feb2c4ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701984100548464744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9400eb14-80e0-4725-906e-b80cd7e998a1,},Annotations:map[string]string{io.kubernetes.container.hash: 71f51c6e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b82c33266c8cd496db092deef6e9921b53aadba47626e760e1294ea1409e54,PodSandboxId:336e55d5fcc5980970adea2e49bcb938aad4643558b4687c2a42eb63264aaebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701984100412535301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6v8td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268d28d1-60a9-4323-b36f-883388fbdcea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf23620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a649b8603569d15e90bcce4de2616fea81d0af3d462a8f26bd21824e8047a1,PodSandboxId:2b4ad458538851e7d650642af6496119ba7b16dc8224cd0760809b17ee15f65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701984099422506625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-cz2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5757c023-02cd-4be8-b4cc-6b45154f7b5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe25e3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b11510039f6adcf3de1bc80032f50d351bac5b29588bda709d3c301dad0668e,PodSandboxId:e541022f9d01c7c30c00b31c6e70476a08a4cd2c6a733f96ddbd9b75cb67b4d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701984076694580067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96e722d20ddbab6255f365e76f46cc68,},Annotations:map[string]string{io.kubernetes.container.hash: 55c76d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3d15a27f8f9fa5de9244c9871c1731bcf83ab27c491a7ab7c7e88e17702f72,PodSandboxId:68aa3031878817a959ffbcf229875292ee66252e148574554751cce4e912e5ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701984076515910746,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c438697617426137ace4267c786049d,},Annotations:map
[string]string{io.kubernetes.container.hash: 703d180b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13af9806c4e5091a10d6775e7166368534650fbeacb2005e0a0355d27b1970d9,PodSandboxId:7524486cd2b1302f63c513126940587fe29ae1868b1f42066ea842c02cf4944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701984076132575148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57367836cee7f9cd3e80bdbd52661bc3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cde6958dd3c4c4f1bc5b359ca4cff102e9fd270d658608e572688c04b4b231f,PodSandboxId:e08ecb9106195236828079e12569898f281c25eecf449e99336fbeab0af9e97b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701984076283574017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff3caf8698d5a46a55e9ed3203d0a59,},An
notations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c064eda2-fe83-4692-b316-04509b7264a1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.922624127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cda92c07-0668-44d5-ad97-efcc1f6c2463 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.922735769Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cda92c07-0668-44d5-ad97-efcc1f6c2463 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.925410814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ff1b4a98-e229-4ed3-add9-9a27a613ecc8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.925878004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984644925856810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=ff1b4a98-e229-4ed3-add9-9a27a613ecc8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.926795350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=938f69e4-519c-4c0c-8c5f-d3e19036005b name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.926890811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=938f69e4-519c-4c0c-8c5f-d3e19036005b name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:30:44 no-preload-950431 crio[713]: time="2023-12-07 21:30:44.927103478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a94bd233c53753083d49569b9f67d5bcca6dcbd661e3423a60f8f1e25313558,PodSandboxId:954f69cb07067d93d138b8d3b21f6e74683655fc2356636293aab3e5feb2c4ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701984100548464744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9400eb14-80e0-4725-906e-b80cd7e998a1,},Annotations:map[string]string{io.kubernetes.container.hash: 71f51c6e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b82c33266c8cd496db092deef6e9921b53aadba47626e760e1294ea1409e54,PodSandboxId:336e55d5fcc5980970adea2e49bcb938aad4643558b4687c2a42eb63264aaebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701984100412535301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6v8td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268d28d1-60a9-4323-b36f-883388fbdcea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf23620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a649b8603569d15e90bcce4de2616fea81d0af3d462a8f26bd21824e8047a1,PodSandboxId:2b4ad458538851e7d650642af6496119ba7b16dc8224cd0760809b17ee15f65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701984099422506625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-cz2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5757c023-02cd-4be8-b4cc-6b45154f7b5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe25e3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b11510039f6adcf3de1bc80032f50d351bac5b29588bda709d3c301dad0668e,PodSandboxId:e541022f9d01c7c30c00b31c6e70476a08a4cd2c6a733f96ddbd9b75cb67b4d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701984076694580067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96e722d20ddbab6255f365e76f46cc68,},Annotations:map[string]string{io.kubernetes.container.hash: 55c76d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3d15a27f8f9fa5de9244c9871c1731bcf83ab27c491a7ab7c7e88e17702f72,PodSandboxId:68aa3031878817a959ffbcf229875292ee66252e148574554751cce4e912e5ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701984076515910746,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c438697617426137ace4267c786049d,},Annotations:map
[string]string{io.kubernetes.container.hash: 703d180b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13af9806c4e5091a10d6775e7166368534650fbeacb2005e0a0355d27b1970d9,PodSandboxId:7524486cd2b1302f63c513126940587fe29ae1868b1f42066ea842c02cf4944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701984076132575148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57367836cee7f9cd3e80bdbd52661bc3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cde6958dd3c4c4f1bc5b359ca4cff102e9fd270d658608e572688c04b4b231f,PodSandboxId:e08ecb9106195236828079e12569898f281c25eecf449e99336fbeab0af9e97b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701984076283574017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff3caf8698d5a46a55e9ed3203d0a59,},An
notations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=938f69e4-519c-4c0c-8c5f-d3e19036005b name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a94bd233c537       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   954f69cb07067       storage-provisioner
	c7b82c33266c8       86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff   9 minutes ago       Running             kube-proxy                0                   336e55d5fcc59       kube-proxy-6v8td
	54a649b860356       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2b4ad45853885       coredns-76f75df574-cz2xd
	7b11510039f6a       5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956   9 minutes ago       Running             kube-apiserver            2                   e541022f9d01c       kube-apiserver-no-preload-950431
	aa3d15a27f8f9       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   68aa303187881       etcd-no-preload-950431
	5cde6958dd3c4       b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09   9 minutes ago       Running             kube-controller-manager   2                   e08ecb9106195       kube-controller-manager-no-preload-950431
	13af9806c4e50       b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542   9 minutes ago       Running             kube-scheduler            2                   7524486cd2b13       kube-scheduler-no-preload-950431
	
	* 
	* ==> coredns [54a649b8603569d15e90bcce4de2616fea81d0af3d462a8f26bd21824e8047a1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:50046 - 42970 "HINFO IN 9151674908356452295.5213838015451573474. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0223671s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-950431
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-950431
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=no-preload-950431
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T21_21_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 21:21:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-950431
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 21:30:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 21:26:50 +0000   Thu, 07 Dec 2023 21:21:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 21:26:50 +0000   Thu, 07 Dec 2023 21:21:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 21:26:50 +0000   Thu, 07 Dec 2023 21:21:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 21:26:50 +0000   Thu, 07 Dec 2023 21:21:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.100
	  Hostname:    no-preload-950431
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fc7293a6643464ba6a5d7a0a1cbcb0b
	  System UUID:                8fc7293a-6643-464b-a6a5-d7a0a1cbcb0b
	  Boot ID:                    affc1820-b0ed-4b55-b3dd-646f094aba6b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.1
	  Kube-Proxy Version:         v1.29.0-rc.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-cz2xd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-no-preload-950431                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-no-preload-950431             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-950431    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-proxy-6v8td                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-no-preload-950431             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-ffkls              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m30s (x8 over 9m30s)  kubelet          Node no-preload-950431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s (x8 over 9m30s)  kubelet          Node no-preload-950431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s (x7 over 9m30s)  kubelet          Node no-preload-950431 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node no-preload-950431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node no-preload-950431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node no-preload-950431 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node no-preload-950431 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m11s                  kubelet          Node no-preload-950431 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node no-preload-950431 event: Registered Node no-preload-950431 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 7 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070110] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.588205] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.528901] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150009] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.465046] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 7 21:16] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.115985] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.182337] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.137630] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.262103] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +30.083739] systemd-fstab-generator[1328]: Ignoring "noauto" for root device
	[ +22.406614] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 7 21:21] systemd-fstab-generator[3935]: Ignoring "noauto" for root device
	[  +9.323110] systemd-fstab-generator[4256]: Ignoring "noauto" for root device
	[ +13.305834] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.349782] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [aa3d15a27f8f9fa5de9244c9871c1731bcf83ab27c491a7ab7c7e88e17702f72] <==
	* {"level":"info","ts":"2023-12-07T21:21:18.282542Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.100:2380"}
	{"level":"info","ts":"2023-12-07T21:21:18.283494Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8a93cffd6fd293f3","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-12-07T21:21:18.283729Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T21:21:18.283774Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T21:21:18.283782Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T21:21:18.28581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a93cffd6fd293f3 switched to configuration voters=(9985553486220268531)"}
	{"level":"info","ts":"2023-12-07T21:21:18.286046Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6ddf9aff62617c59","local-member-id":"8a93cffd6fd293f3","added-peer-id":"8a93cffd6fd293f3","added-peer-peer-urls":["https://192.168.50.100:2380"]}
	{"level":"info","ts":"2023-12-07T21:21:19.261161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a93cffd6fd293f3 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-07T21:21:19.261281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a93cffd6fd293f3 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-07T21:21:19.261319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a93cffd6fd293f3 received MsgPreVoteResp from 8a93cffd6fd293f3 at term 1"}
	{"level":"info","ts":"2023-12-07T21:21:19.261333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a93cffd6fd293f3 became candidate at term 2"}
	{"level":"info","ts":"2023-12-07T21:21:19.261339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a93cffd6fd293f3 received MsgVoteResp from 8a93cffd6fd293f3 at term 2"}
	{"level":"info","ts":"2023-12-07T21:21:19.261347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a93cffd6fd293f3 became leader at term 2"}
	{"level":"info","ts":"2023-12-07T21:21:19.261356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8a93cffd6fd293f3 elected leader 8a93cffd6fd293f3 at term 2"}
	{"level":"info","ts":"2023-12-07T21:21:19.262852Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8a93cffd6fd293f3","local-member-attributes":"{Name:no-preload-950431 ClientURLs:[https://192.168.50.100:2379]}","request-path":"/0/members/8a93cffd6fd293f3/attributes","cluster-id":"6ddf9aff62617c59","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T21:21:19.263038Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:21:19.263494Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:21:19.263654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:21:19.264044Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T21:21:19.264086Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T21:21:19.266028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.100:2379"}
	{"level":"info","ts":"2023-12-07T21:21:19.266158Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6ddf9aff62617c59","local-member-id":"8a93cffd6fd293f3","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:21:19.266303Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:21:19.266358Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:21:19.268553Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  21:30:45 up 15 min,  0 users,  load average: 0.34, 0.31, 0.26
	Linux no-preload-950431 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7b11510039f6adcf3de1bc80032f50d351bac5b29588bda709d3c301dad0668e] <==
	* I1207 21:24:39.642847       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:26:20.786769       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:26:20.786944       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1207 21:26:21.787671       1 handler_proxy.go:93] no RequestInfo found in the context
	W1207 21:26:21.787832       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:26:21.788065       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:26:21.788147       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1207 21:26:21.788077       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:26:21.790200       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:27:21.789322       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:27:21.789453       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:27:21.789466       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:27:21.790555       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:27:21.790705       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:27:21.790732       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:29:21.790016       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:29:21.790140       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:29:21.790152       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:29:21.791302       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:29:21.791515       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:29:21.791566       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [5cde6958dd3c4c4f1bc5b359ca4cff102e9fd270d658608e572688c04b4b231f] <==
	* I1207 21:25:06.420060       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:25:35.979906       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:25:36.429441       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:26:05.985622       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:26:06.439931       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:26:35.991873       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:26:36.450644       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:27:05.999705       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:27:06.459477       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:27:36.006153       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:27:36.470115       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1207 21:27:37.304284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="864.402µs"
	I1207 21:27:50.304868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="142.941µs"
	E1207 21:28:06.011500       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:28:06.482339       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:28:36.016420       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:28:36.491313       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:29:06.024759       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:29:06.502379       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:29:36.031036       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:29:36.510486       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:30:06.035875       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:30:06.518705       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:30:36.041333       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:30:36.528011       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [c7b82c33266c8cd496db092deef6e9921b53aadba47626e760e1294ea1409e54] <==
	* I1207 21:21:40.859296       1 server_others.go:72] "Using iptables proxy"
	I1207 21:21:40.877497       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.100"]
	I1207 21:21:40.932527       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1207 21:21:40.932606       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 21:21:40.932626       1 server_others.go:168] "Using iptables Proxier"
	I1207 21:21:40.936719       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 21:21:40.936910       1 server.go:865] "Version info" version="v1.29.0-rc.1"
	I1207 21:21:40.936953       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 21:21:40.939612       1 config.go:188] "Starting service config controller"
	I1207 21:21:40.939680       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 21:21:40.939718       1 config.go:97] "Starting endpoint slice config controller"
	I1207 21:21:40.939735       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 21:21:40.944712       1 config.go:315] "Starting node config controller"
	I1207 21:21:40.944785       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 21:21:41.040652       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 21:21:41.040715       1 shared_informer.go:318] Caches are synced for service config
	I1207 21:21:41.044872       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [13af9806c4e5091a10d6775e7166368534650fbeacb2005e0a0355d27b1970d9] <==
	* W1207 21:21:20.803505       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:20.803683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 21:21:20.805092       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:20.806786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 21:21:21.606481       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 21:21:21.606545       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 21:21:21.686916       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:21.687044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 21:21:21.761903       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 21:21:21.762448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1207 21:21:21.796131       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 21:21:21.796396       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1207 21:21:21.798748       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:21:21.798808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 21:21:21.961495       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:21.961552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1207 21:21:22.033518       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 21:21:22.033626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1207 21:21:22.053600       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 21:21:22.053660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1207 21:21:22.063432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:22.063541       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1207 21:21:22.101425       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:22.101551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1207 21:21:23.996140       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 21:15:53 UTC, ends at Thu 2023-12-07 21:30:45 UTC. --
	Dec 07 21:27:50 no-preload-950431 kubelet[4263]: E1207 21:27:50.287690    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:28:05 no-preload-950431 kubelet[4263]: E1207 21:28:05.286803    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:28:19 no-preload-950431 kubelet[4263]: E1207 21:28:19.286598    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:28:24 no-preload-950431 kubelet[4263]: E1207 21:28:24.309679    4263 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:28:24 no-preload-950431 kubelet[4263]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:28:24 no-preload-950431 kubelet[4263]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:28:24 no-preload-950431 kubelet[4263]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:28:33 no-preload-950431 kubelet[4263]: E1207 21:28:33.285912    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:28:45 no-preload-950431 kubelet[4263]: E1207 21:28:45.286000    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:29:00 no-preload-950431 kubelet[4263]: E1207 21:29:00.287454    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:29:14 no-preload-950431 kubelet[4263]: E1207 21:29:14.286782    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:29:24 no-preload-950431 kubelet[4263]: E1207 21:29:24.310619    4263 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:29:24 no-preload-950431 kubelet[4263]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:29:24 no-preload-950431 kubelet[4263]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:29:24 no-preload-950431 kubelet[4263]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:29:28 no-preload-950431 kubelet[4263]: E1207 21:29:28.287598    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:29:43 no-preload-950431 kubelet[4263]: E1207 21:29:43.286525    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:29:55 no-preload-950431 kubelet[4263]: E1207 21:29:55.286523    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:30:10 no-preload-950431 kubelet[4263]: E1207 21:30:10.290448    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:30:22 no-preload-950431 kubelet[4263]: E1207 21:30:22.286374    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:30:24 no-preload-950431 kubelet[4263]: E1207 21:30:24.310453    4263 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:30:24 no-preload-950431 kubelet[4263]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:30:24 no-preload-950431 kubelet[4263]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:30:24 no-preload-950431 kubelet[4263]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:30:36 no-preload-950431 kubelet[4263]: E1207 21:30:36.286752    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	
	* 
	* ==> storage-provisioner [9a94bd233c53753083d49569b9f67d5bcca6dcbd661e3423a60f8f1e25313558] <==
	* I1207 21:21:40.784738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 21:21:40.804638       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 21:21:40.804750       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 21:21:40.814504       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 21:21:40.814715       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-950431_aa3775fe-d082-4576-986b-c84b350e0039!
	I1207 21:21:40.815867       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbb7d48e-5ba0-415f-b255-9b1b2a4e906e", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-950431_aa3775fe-d082-4576-986b-c84b350e0039 became leader
	I1207 21:21:40.915412       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-950431_aa3775fe-d082-4576-986b-c84b350e0039!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-950431 -n no-preload-950431
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-950431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ffkls
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-950431 describe pod metrics-server-57f55c9bc5-ffkls
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-950431 describe pod metrics-server-57f55c9bc5-ffkls: exit status 1 (82.051563ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ffkls" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-950431 describe pod metrics-server-57f55c9bc5-ffkls: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1207 21:24:28.941655   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 21:26:05.939625   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 21:26:41.700224   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 21:27:28.986892   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 21:29:28.942150   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-483745 -n old-k8s-version-483745
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-07 21:33:06.90939121 +0000 UTC m=+5511.077536182
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-483745 -n old-k8s-version-483745
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-483745 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-483745 logs -n 25: (1.686749456s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-620116 -- sudo                         | cert-options-620116          | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-620116                                 | cert-options-620116          | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:10 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:08 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-099448                              | stopped-upgrade-099448       | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:07 UTC |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-483745        | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-121798 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | disable-driver-mounts-121798                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:10 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-598346            | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC | 07 Dec 23 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-950431             | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-275828  | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-483745             | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-598346                 | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-950431                  | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-275828       | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 21:12:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 21:12:54.827966   51113 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:12:54.828121   51113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:12:54.828131   51113 out.go:309] Setting ErrFile to fd 2...
	I1207 21:12:54.828138   51113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:12:54.828309   51113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:12:54.828894   51113 out.go:303] Setting JSON to false
	I1207 21:12:54.829778   51113 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6921,"bootTime":1701976654,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:12:54.829872   51113 start.go:138] virtualization: kvm guest
	I1207 21:12:54.832359   51113 out.go:177] * [default-k8s-diff-port-275828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:12:54.833958   51113 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:12:54.833997   51113 notify.go:220] Checking for updates...
	I1207 21:12:54.835484   51113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:12:54.837345   51113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:12:54.838716   51113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:12:54.840105   51113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:12:54.841497   51113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:12:54.843170   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:12:54.843587   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:12:54.843638   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:12:54.857987   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I1207 21:12:54.858345   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:12:54.858826   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:12:54.858846   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:12:54.859141   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:12:54.859317   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:12:54.859528   51113 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:12:54.859797   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:12:54.859827   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:12:54.873523   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1207 21:12:54.873866   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:12:54.874374   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:12:54.874399   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:12:54.874726   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:12:54.874907   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:12:54.906909   51113 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 21:12:54.908496   51113 start.go:298] selected driver: kvm2
	I1207 21:12:54.908515   51113 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:12:54.908626   51113 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:12:54.909287   51113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:54.909431   51113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:12:54.924711   51113 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:12:54.925077   51113 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 21:12:54.925136   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:12:54.925149   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:12:54.925174   51113 start_flags.go:323] config:
	{Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-27582
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:12:54.925311   51113 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:54.927216   51113 out.go:177] * Starting control plane node default-k8s-diff-port-275828 in cluster default-k8s-diff-port-275828
	I1207 21:12:51.859250   51037 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:12:51.859366   51037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:12:51.859440   51037 cache.go:107] acquiring lock: {Name:mke7b9cce1dd6177935767b47cf17b792acd813b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859507   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 21:12:51.859492   51037 cache.go:107] acquiring lock: {Name:mk57eae37995939df6ffd0df03832314e9e6100e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859493   51037 cache.go:107] acquiring lock: {Name:mk5a91936dc04372c96de7514149d2b4b0d17dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859522   51037 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.402µs
	I1207 21:12:51.859538   51037 cache.go:107] acquiring lock: {Name:mk4c716c1104ca016c5e335d1cbf204f19d0197f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859560   51037 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 21:12:51.859581   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 exists
	I1207 21:12:51.859591   51037 start.go:365] acquiring machines lock for no-preload-950431: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:12:51.859593   51037 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1" took 111.482µs
	I1207 21:12:51.859611   51037 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859596   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 exists
	I1207 21:12:51.859564   51037 cache.go:107] acquiring lock: {Name:mke02250ffd1d3b6fb4470dd05093397053b289d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859627   51037 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1" took 139.857µs
	I1207 21:12:51.859637   51037 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859588   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I1207 21:12:51.859647   51037 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 112.196µs
	I1207 21:12:51.859621   51037 cache.go:107] acquiring lock: {Name:mk2a1c8afaf74efaf0daac8bf102ee63aa4b5154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859664   51037 cache.go:107] acquiring lock: {Name:mk042626599761dccdc47fcf8ee95d59d24917b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859660   51037 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I1207 21:12:51.859443   51037 cache.go:107] acquiring lock: {Name:mk69e12850117516cff168d811605a739d29808c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859701   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I1207 21:12:51.859715   51037 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 185.872µs
	I1207 21:12:51.859736   51037 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I1207 21:12:51.859728   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 exists
	I1207 21:12:51.859750   51037 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1" took 313.668µs
	I1207 21:12:51.859758   51037 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859796   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 exists
	I1207 21:12:51.859809   51037 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1" took 179.42µs
	I1207 21:12:51.859823   51037 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859808   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I1207 21:12:51.859910   51037 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 310.345µs
	I1207 21:12:51.859931   51037 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I1207 21:12:51.859947   51037 cache.go:87] Successfully saved all images to host disk.
	I1207 21:12:57.714205   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:12:54.928473   51113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:12:54.928503   51113 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 21:12:54.928516   51113 cache.go:56] Caching tarball of preloaded images
	I1207 21:12:54.928608   51113 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:12:54.928621   51113 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 21:12:54.928718   51113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:12:54.928893   51113 start.go:365] acquiring machines lock for default-k8s-diff-port-275828: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:13:00.786234   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:06.866234   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:09.938211   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:16.018206   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:19.090196   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:25.170164   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:28.242299   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:34.322194   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:37.394241   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:43.474183   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:46.546186   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:52.626214   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:55.698176   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:01.778218   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:04.850228   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:10.930239   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:14.002222   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:20.082270   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:23.154237   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:29.234226   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:32.306242   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:38.386218   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:41.458157   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:47.538219   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:50.610223   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:56.690260   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:59.766215   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:05.842220   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:08.914154   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:14.994193   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:18.066232   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:21.070365   50624 start.go:369] acquired machines lock for "embed-certs-598346" in 3m44.734224905s
	I1207 21:15:21.070421   50624 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:15:21.070427   50624 fix.go:54] fixHost starting: 
	I1207 21:15:21.070755   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:15:21.070787   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:15:21.085298   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I1207 21:15:21.085643   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:15:21.086150   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:15:21.086172   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:15:21.086491   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:15:21.086681   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:21.086828   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:15:21.088256   50624 fix.go:102] recreateIfNeeded on embed-certs-598346: state=Stopped err=<nil>
	I1207 21:15:21.088283   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	W1207 21:15:21.088465   50624 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:15:21.090020   50624 out.go:177] * Restarting existing kvm2 VM for "embed-certs-598346" ...
	I1207 21:15:21.091364   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Start
	I1207 21:15:21.091521   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring networks are active...
	I1207 21:15:21.092215   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring network default is active
	I1207 21:15:21.092551   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring network mk-embed-certs-598346 is active
	I1207 21:15:21.092938   50624 main.go:141] libmachine: (embed-certs-598346) Getting domain xml...
	I1207 21:15:21.093647   50624 main.go:141] libmachine: (embed-certs-598346) Creating domain...
	I1207 21:15:21.067977   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:15:21.068024   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:15:21.070214   50270 machine.go:91] provisioned docker machine in 4m37.409386757s
	I1207 21:15:21.070272   50270 fix.go:56] fixHost completed within 4m37.430493841s
	I1207 21:15:21.070280   50270 start.go:83] releasing machines lock for "old-k8s-version-483745", held for 4m37.43051315s
	W1207 21:15:21.070299   50270 start.go:694] error starting host: provision: host is not running
	W1207 21:15:21.070399   50270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1207 21:15:21.070408   50270 start.go:709] Will try again in 5 seconds ...
	I1207 21:15:22.319220   50624 main.go:141] libmachine: (embed-certs-598346) Waiting to get IP...
	I1207 21:15:22.320059   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.320432   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.320505   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.320416   51516 retry.go:31] will retry after 306.732639ms: waiting for machine to come up
	I1207 21:15:22.629026   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.629495   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.629523   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.629465   51516 retry.go:31] will retry after 244.665765ms: waiting for machine to come up
	I1207 21:15:22.875896   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.876248   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.876275   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.876210   51516 retry.go:31] will retry after 389.522298ms: waiting for machine to come up
	I1207 21:15:23.267728   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:23.268119   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:23.268140   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:23.268064   51516 retry.go:31] will retry after 521.34699ms: waiting for machine to come up
	I1207 21:15:23.790614   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:23.791043   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:23.791067   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:23.791002   51516 retry.go:31] will retry after 493.71234ms: waiting for machine to come up
	I1207 21:15:24.286698   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:24.287121   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:24.287145   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:24.287061   51516 retry.go:31] will retry after 736.984501ms: waiting for machine to come up
	I1207 21:15:25.025941   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:25.026294   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:25.026317   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:25.026256   51516 retry.go:31] will retry after 1.06643424s: waiting for machine to come up
	I1207 21:15:26.093760   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:26.094266   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:26.094306   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:26.094211   51516 retry.go:31] will retry after 1.226791228s: waiting for machine to come up
	I1207 21:15:26.072827   50270 start.go:365] acquiring machines lock for old-k8s-version-483745: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:15:27.322536   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:27.322912   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:27.322940   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:27.322857   51516 retry.go:31] will retry after 1.246504696s: waiting for machine to come up
	I1207 21:15:28.571241   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:28.571651   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:28.571677   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:28.571606   51516 retry.go:31] will retry after 2.084958391s: waiting for machine to come up
	I1207 21:15:30.658654   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:30.659047   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:30.659080   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:30.658990   51516 retry.go:31] will retry after 2.104944011s: waiting for machine to come up
	I1207 21:15:32.765669   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:32.766136   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:32.766167   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:32.766076   51516 retry.go:31] will retry after 3.05038185s: waiting for machine to come up
	I1207 21:15:35.819082   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:35.819446   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:35.819477   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:35.819399   51516 retry.go:31] will retry after 3.445969037s: waiting for machine to come up
	I1207 21:15:40.686593   51037 start.go:369] acquired machines lock for "no-preload-950431" in 2m48.82697748s
	I1207 21:15:40.686639   51037 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:15:40.686646   51037 fix.go:54] fixHost starting: 
	I1207 21:15:40.687011   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:15:40.687043   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:15:40.703294   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I1207 21:15:40.703682   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:15:40.704245   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:15:40.704276   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:15:40.704620   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:15:40.704792   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:15:40.704938   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:15:40.706394   51037 fix.go:102] recreateIfNeeded on no-preload-950431: state=Stopped err=<nil>
	I1207 21:15:40.706420   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	W1207 21:15:40.706593   51037 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:15:40.709148   51037 out.go:177] * Restarting existing kvm2 VM for "no-preload-950431" ...
	I1207 21:15:39.269367   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.269776   50624 main.go:141] libmachine: (embed-certs-598346) Found IP for machine: 192.168.72.180
	I1207 21:15:39.269802   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has current primary IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.269808   50624 main.go:141] libmachine: (embed-certs-598346) Reserving static IP address...
	I1207 21:15:39.270234   50624 main.go:141] libmachine: (embed-certs-598346) Reserved static IP address: 192.168.72.180
	I1207 21:15:39.270265   50624 main.go:141] libmachine: (embed-certs-598346) Waiting for SSH to be available...
	I1207 21:15:39.270279   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "embed-certs-598346", mac: "52:54:00:15:56:8f", ip: "192.168.72.180"} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.270308   50624 main.go:141] libmachine: (embed-certs-598346) DBG | skip adding static IP to network mk-embed-certs-598346 - found existing host DHCP lease matching {name: "embed-certs-598346", mac: "52:54:00:15:56:8f", ip: "192.168.72.180"}
	I1207 21:15:39.270325   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Getting to WaitForSSH function...
	I1207 21:15:39.272292   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.272639   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.272674   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.272773   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Using SSH client type: external
	I1207 21:15:39.272827   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa (-rw-------)
	I1207 21:15:39.272869   50624 main.go:141] libmachine: (embed-certs-598346) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:15:39.272887   50624 main.go:141] libmachine: (embed-certs-598346) DBG | About to run SSH command:
	I1207 21:15:39.272903   50624 main.go:141] libmachine: (embed-certs-598346) DBG | exit 0
	I1207 21:15:39.363326   50624 main.go:141] libmachine: (embed-certs-598346) DBG | SSH cmd err, output: <nil>: 
	I1207 21:15:39.363757   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetConfigRaw
	I1207 21:15:39.364301   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:39.366828   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.367157   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.367206   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.367459   50624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/config.json ...
	I1207 21:15:39.367693   50624 machine.go:88] provisioning docker machine ...
	I1207 21:15:39.367713   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:39.367918   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.368085   50624 buildroot.go:166] provisioning hostname "embed-certs-598346"
	I1207 21:15:39.368104   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.368241   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.370443   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.370771   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.370798   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.371044   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.371192   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.371358   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.371507   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.371660   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:39.372058   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:39.372078   50624 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-598346 && echo "embed-certs-598346" | sudo tee /etc/hostname
	I1207 21:15:39.498370   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-598346
	
	I1207 21:15:39.498394   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.501284   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.501691   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.501711   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.501952   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.502135   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.502267   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.502432   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.502604   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:39.503052   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:39.503091   50624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-598346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-598346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-598346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:15:39.625683   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:15:39.625713   50624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:15:39.625735   50624 buildroot.go:174] setting up certificates
	I1207 21:15:39.625748   50624 provision.go:83] configureAuth start
	I1207 21:15:39.625760   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.626074   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:39.628753   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.629102   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.629125   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.629277   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.631206   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.631478   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.631507   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.631632   50624 provision.go:138] copyHostCerts
	I1207 21:15:39.631682   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:15:39.631698   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:15:39.631763   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:15:39.631844   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:15:39.631852   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:15:39.631874   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:15:39.631922   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:15:39.631928   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:15:39.631951   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:15:39.631993   50624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.embed-certs-598346 san=[192.168.72.180 192.168.72.180 localhost 127.0.0.1 minikube embed-certs-598346]
	I1207 21:15:39.968036   50624 provision.go:172] copyRemoteCerts
	I1207 21:15:39.968098   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:15:39.968121   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.970937   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.971356   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.971386   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.971627   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.971847   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.972010   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.972148   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.060156   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:15:40.082673   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1207 21:15:40.104263   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:15:40.125974   50624 provision.go:86] duration metric: configureAuth took 500.211549ms
	I1207 21:15:40.126012   50624 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:15:40.126233   50624 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:15:40.126317   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.129108   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.129484   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.129505   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.129662   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.129884   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.130039   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.130197   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.130358   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:40.130677   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:40.130698   50624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:15:40.439407   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:15:40.439438   50624 machine.go:91] provisioned docker machine in 1.071729841s
	I1207 21:15:40.439451   50624 start.go:300] post-start starting for "embed-certs-598346" (driver="kvm2")
	I1207 21:15:40.439465   50624 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:15:40.439504   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.439827   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:15:40.439860   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.442750   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.443135   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.443160   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.443400   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.443623   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.443811   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.443974   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.531350   50624 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:15:40.535614   50624 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:15:40.535644   50624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:15:40.535720   50624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:15:40.535813   50624 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:15:40.535938   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:15:40.543981   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:15:40.566714   50624 start.go:303] post-start completed in 127.248268ms
	I1207 21:15:40.566739   50624 fix.go:56] fixHost completed within 19.496310567s
	I1207 21:15:40.566763   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.569439   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.569774   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.569791   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.569915   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.570085   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.570257   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.570386   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.570534   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:40.570842   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:40.570855   50624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:15:40.686455   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983740.637211698
	
	I1207 21:15:40.686479   50624 fix.go:206] guest clock: 1701983740.637211698
	I1207 21:15:40.686486   50624 fix.go:219] Guest: 2023-12-07 21:15:40.637211698 +0000 UTC Remote: 2023-12-07 21:15:40.566742665 +0000 UTC m=+244.381466877 (delta=70.469033ms)
	I1207 21:15:40.686503   50624 fix.go:190] guest clock delta is within tolerance: 70.469033ms
	I1207 21:15:40.686508   50624 start.go:83] releasing machines lock for "embed-certs-598346", held for 19.61610992s
	I1207 21:15:40.686533   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.686809   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:40.689665   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.690046   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.690069   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.690242   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690685   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690903   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690988   50624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:15:40.691035   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.691162   50624 ssh_runner.go:195] Run: cat /version.json
	I1207 21:15:40.691196   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.693712   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.693943   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694078   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.694106   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694269   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.694295   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.694333   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694419   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.694501   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.694580   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.694685   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.694742   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.694816   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.694925   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.801618   50624 ssh_runner.go:195] Run: systemctl --version
	I1207 21:15:40.807496   50624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:15:40.967288   50624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:15:40.974223   50624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:15:40.974315   50624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:15:40.988391   50624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:15:40.988418   50624 start.go:475] detecting cgroup driver to use...
	I1207 21:15:40.988510   50624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:15:41.002379   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:15:41.016074   50624 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:15:41.016125   50624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:15:41.031096   50624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:15:41.044808   50624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:15:41.150630   50624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:15:40.710656   51037 main.go:141] libmachine: (no-preload-950431) Calling .Start
	I1207 21:15:40.710832   51037 main.go:141] libmachine: (no-preload-950431) Ensuring networks are active...
	I1207 21:15:40.711509   51037 main.go:141] libmachine: (no-preload-950431) Ensuring network default is active
	I1207 21:15:40.711813   51037 main.go:141] libmachine: (no-preload-950431) Ensuring network mk-no-preload-950431 is active
	I1207 21:15:40.712201   51037 main.go:141] libmachine: (no-preload-950431) Getting domain xml...
	I1207 21:15:40.712860   51037 main.go:141] libmachine: (no-preload-950431) Creating domain...
	I1207 21:15:41.269009   50624 docker.go:219] disabling docker service ...
	I1207 21:15:41.269067   50624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:15:41.281800   50624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:15:41.293694   50624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:15:41.413774   50624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:15:41.523960   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:15:41.536474   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:15:41.553611   50624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:15:41.553668   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.562741   50624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:15:41.562831   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.571841   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.580887   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.590259   50624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:15:41.599349   50624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:15:41.607259   50624 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:15:41.607314   50624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:15:41.619425   50624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:15:41.627826   50624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:15:41.736815   50624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:15:41.896418   50624 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:15:41.896505   50624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:15:41.901539   50624 start.go:543] Will wait 60s for crictl version
	I1207 21:15:41.901598   50624 ssh_runner.go:195] Run: which crictl
	I1207 21:15:41.905454   50624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:15:41.942196   50624 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:15:41.942267   50624 ssh_runner.go:195] Run: crio --version
	I1207 21:15:41.986024   50624 ssh_runner.go:195] Run: crio --version
	I1207 21:15:42.034806   50624 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:15:42.036352   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:42.039304   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:42.039704   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:42.039745   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:42.039930   50624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1207 21:15:42.043951   50624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:15:42.056473   50624 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:15:42.056535   50624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:15:42.099359   50624 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:15:42.099459   50624 ssh_runner.go:195] Run: which lz4
	I1207 21:15:42.103324   50624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 21:15:42.107440   50624 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:15:42.107476   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:15:44.063941   50624 crio.go:444] Took 1.960653 seconds to copy over tarball
	I1207 21:15:44.064018   50624 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:15:41.955586   51037 main.go:141] libmachine: (no-preload-950431) Waiting to get IP...
	I1207 21:15:41.956530   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:41.956967   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:41.957004   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:41.956919   51634 retry.go:31] will retry after 266.143384ms: waiting for machine to come up
	I1207 21:15:42.224547   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.225112   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.225142   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.225060   51634 retry.go:31] will retry after 314.364486ms: waiting for machine to come up
	I1207 21:15:42.540722   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.541264   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.541294   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.541225   51634 retry.go:31] will retry after 447.845741ms: waiting for machine to come up
	I1207 21:15:42.990858   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.991283   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.991310   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.991246   51634 retry.go:31] will retry after 494.509595ms: waiting for machine to come up
	I1207 21:15:43.487745   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:43.488268   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:43.488305   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:43.488218   51634 retry.go:31] will retry after 517.471464ms: waiting for machine to come up
	I1207 21:15:44.007846   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:44.008291   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:44.008322   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:44.008247   51634 retry.go:31] will retry after 755.53339ms: waiting for machine to come up
	I1207 21:15:44.765367   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:44.765799   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:44.765827   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:44.765743   51634 retry.go:31] will retry after 947.674862ms: waiting for machine to come up
	I1207 21:15:45.715436   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:45.715859   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:45.715890   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:45.715811   51634 retry.go:31] will retry after 1.304063218s: waiting for machine to come up
	I1207 21:15:47.049597   50624 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.985550761s)
	I1207 21:15:47.049622   50624 crio.go:451] Took 2.985655 seconds to extract the tarball
	I1207 21:15:47.049632   50624 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:15:47.089358   50624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:15:47.145982   50624 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:15:47.146007   50624 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:15:47.146069   50624 ssh_runner.go:195] Run: crio config
	I1207 21:15:47.205864   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:15:47.205888   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:15:47.205904   50624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:15:47.205933   50624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-598346 NodeName:embed-certs-598346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:15:47.206106   50624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-598346"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:15:47.206189   50624 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-598346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:15:47.206249   50624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:15:47.214998   50624 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:15:47.215065   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:15:47.223252   50624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1207 21:15:47.239698   50624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:15:47.258476   50624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1207 21:15:47.275957   50624 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1207 21:15:47.279689   50624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:15:47.295204   50624 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346 for IP: 192.168.72.180
	I1207 21:15:47.295234   50624 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:15:47.295391   50624 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:15:47.295436   50624 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:15:47.295501   50624 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/client.key
	I1207 21:15:47.295552   50624 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.key.379caec1
	I1207 21:15:47.295589   50624 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.key
	I1207 21:15:47.295686   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:15:47.295712   50624 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:15:47.295722   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:15:47.295748   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:15:47.295772   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:15:47.295795   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:15:47.295835   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:15:47.296438   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:15:47.324057   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:15:47.350921   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:15:47.378603   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:15:47.405443   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:15:47.429942   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:15:47.455437   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:15:47.478735   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:15:47.503326   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:15:47.525886   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:15:47.549414   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:15:47.572018   50624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:15:47.590990   50624 ssh_runner.go:195] Run: openssl version
	I1207 21:15:47.597874   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:15:47.610087   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.615875   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.615949   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.622941   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:15:47.632217   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:15:47.641323   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.645877   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.645955   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.651452   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:15:47.660848   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:15:47.670225   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.674620   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.674670   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.680118   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:15:47.689444   50624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:15:47.693775   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:15:47.699741   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:15:47.705442   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:15:47.710938   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:15:47.716367   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:15:47.721958   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:15:47.727403   50624 kubeadm.go:404] StartCluster: {Name:embed-certs-598346 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:15:47.727520   50624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:15:47.727599   50624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:15:47.771682   50624 cri.go:89] found id: ""
	I1207 21:15:47.771763   50624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:15:47.782923   50624 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:15:47.782946   50624 kubeadm.go:636] restartCluster start
	I1207 21:15:47.783020   50624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:15:47.791494   50624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.792645   50624 kubeconfig.go:92] found "embed-certs-598346" server: "https://192.168.72.180:8443"
	I1207 21:15:47.794953   50624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:15:47.804014   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:47.804096   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:47.815412   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.815433   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:47.815503   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:47.825646   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:48.326356   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:48.326438   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:48.338771   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:48.826334   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:48.826405   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:48.837498   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:49.325998   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:49.326084   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:49.338197   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:49.825701   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:49.825821   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:49.842649   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:50.326181   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:50.326277   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:50.341560   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:50.826087   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:50.826183   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:50.841186   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.021061   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:47.021495   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:47.021519   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:47.021459   51634 retry.go:31] will retry after 1.183999845s: waiting for machine to come up
	I1207 21:15:48.206768   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:48.207222   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:48.207250   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:48.207183   51634 retry.go:31] will retry after 1.595211966s: waiting for machine to come up
	I1207 21:15:49.804832   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:49.805298   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:49.805328   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:49.805229   51634 retry.go:31] will retry after 2.126345359s: waiting for machine to come up
	I1207 21:15:51.325994   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:51.326083   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:51.338573   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:51.826180   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:51.826253   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:51.837573   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:52.326115   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:52.326192   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:52.336984   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:52.826590   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:52.826681   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:52.837678   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:53.326205   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:53.326279   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:53.337579   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:53.826047   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:53.826145   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:53.840263   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:54.325765   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:54.325842   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:54.337452   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:54.825969   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:54.826063   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:54.837428   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:55.325968   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:55.326060   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:55.337128   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:55.826749   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:55.826832   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:55.838002   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:51.933915   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:51.934338   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:51.934372   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:51.934279   51634 retry.go:31] will retry after 2.448139802s: waiting for machine to come up
	I1207 21:15:54.384038   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:54.384399   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:54.384425   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:54.384351   51634 retry.go:31] will retry after 3.211975182s: waiting for machine to come up
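Interleaved with that poll, the no-preload-950431 process is waiting for its freshly started KVM domain to pick up a DHCP lease; retry.go:31 retries the IP lookup with a growing delay (1.18s, 1.60s, 2.13s, and so on above). A rough sketch of that pattern, with lookupIP standing in for the libvirt lease lookup (the function and the exact backoff formula are assumptions for illustration):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP keeps asking the hypervisor for the domain's IP, sleeping a
    // little longer (plus jitter) after every miss, as retry.go does above.
    func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
        delay := time.Second
        for i := 0; i < attempts; i++ {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2)))
            delay = delay * 3 / 2 // grow the base delay between attempts
        }
        return "", fmt.Errorf("no IP after %d attempts", attempts)
    }

    func main() {
        ip, err := waitForIP(func() (string, error) { return "192.168.50.100", nil }, 12)
        fmt.Println(ip, err)
    }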
	I1207 21:15:56.325893   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:56.326007   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:56.337698   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:56.825827   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:56.825964   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:56.836945   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:57.326560   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:57.326637   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:57.337299   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:57.804902   50624 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:15:57.804933   50624 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:15:57.804946   50624 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:15:57.805023   50624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:15:57.846788   50624 cri.go:89] found id: ""
	I1207 21:15:57.846877   50624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:15:57.861513   50624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:15:57.869730   50624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:15:57.869781   50624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:15:57.877777   50624 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:15:57.877801   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:57.992244   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:58.878385   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.051985   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.136414   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.232261   50624 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:15:59.232358   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:59.246262   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:59.760617   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:00.260132   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:00.760723   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
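Once the apiserver poll has given up, kubeadm.go reconfigures the control plane from /var/tmp/minikube/kubeadm.yaml: the new config is copied into place and the individual kubeadm init phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) are run before the apiserver process is polled again. A compact sketch of that sequence, shelling out to the same commands shown in the log (run locally here instead of over SSH):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // run executes one shell command and aborts on failure.
    func run(cmd string) {
        if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
            log.Fatalf("%s failed: %v\n%s", cmd, err, out)
        }
    }

    func main() {
        // Reuse the freshly generated kubeadm config, as in the log.
        run("sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml")

        // The same init phases the log runs, in the same order.
        for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
            run(fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase))
        }
    }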
	I1207 21:15:57.599056   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:57.599417   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:57.599444   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:57.599382   51634 retry.go:31] will retry after 5.532381184s: waiting for machine to come up
	I1207 21:16:04.442905   51113 start.go:369] acquired machines lock for "default-k8s-diff-port-275828" in 3m9.513966804s
	I1207 21:16:04.442972   51113 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:16:04.442985   51113 fix.go:54] fixHost starting: 
	I1207 21:16:04.443390   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:04.443434   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:04.460087   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45507
	I1207 21:16:04.460495   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:04.460991   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:04.461014   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:04.461405   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:04.461582   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:04.461705   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:04.463304   51113 fix.go:102] recreateIfNeeded on default-k8s-diff-port-275828: state=Stopped err=<nil>
	I1207 21:16:04.463337   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	W1207 21:16:04.463494   51113 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:16:04.465895   51113 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-275828" ...
	I1207 21:16:04.467328   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Start
	I1207 21:16:04.467485   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring networks are active...
	I1207 21:16:04.468206   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring network default is active
	I1207 21:16:04.468581   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring network mk-default-k8s-diff-port-275828 is active
	I1207 21:16:04.468943   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Getting domain xml...
	I1207 21:16:04.469483   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Creating domain...
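At this point a third profile, default-k8s-diff-port-275828, takes the machines lock and goes through the same fix.go path: the existing domain is found in state Stopped, so rather than recreating it the driver reactivates its networks and boots it again. The decision itself is small; a sketch of it, with a hypothetical vm interface standing in for the libmachine driver calls seen in the log:

    package main

    import "fmt"

    type vm interface {
        State() (string, error)
        Start() error
    }

    // fixHost sketches the recreateIfNeeded decision above: a machine reported
    // as Stopped is restarted rather than recreated from scratch.
    func fixHost(m vm) error {
        st, err := m.State()
        if err != nil {
            return err
        }
        if st == "Stopped" {
            return m.Start() // the "Restarting existing kvm2 VM" branch in the log
        }
        return nil // already running, nothing to fix
    }

    type stub struct{}

    func (stub) State() (string, error) { return "Stopped", nil }
    func (stub) Start() error           { return nil }

    func main() { fmt.Println(fixHost(stub{})) }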
	I1207 21:16:03.134233   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.134762   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has current primary IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.134794   51037 main.go:141] libmachine: (no-preload-950431) Found IP for machine: 192.168.50.100
	I1207 21:16:03.134811   51037 main.go:141] libmachine: (no-preload-950431) Reserving static IP address...
	I1207 21:16:03.135186   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.135209   51037 main.go:141] libmachine: (no-preload-950431) Reserved static IP address: 192.168.50.100
	I1207 21:16:03.135230   51037 main.go:141] libmachine: (no-preload-950431) DBG | skip adding static IP to network mk-no-preload-950431 - found existing host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"}
	I1207 21:16:03.135251   51037 main.go:141] libmachine: (no-preload-950431) DBG | Getting to WaitForSSH function...
	I1207 21:16:03.135265   51037 main.go:141] libmachine: (no-preload-950431) Waiting for SSH to be available...
	I1207 21:16:03.137331   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.137662   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.137689   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.137792   51037 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH client type: external
	I1207 21:16:03.137817   51037 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa (-rw-------)
	I1207 21:16:03.137854   51037 main.go:141] libmachine: (no-preload-950431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:03.137871   51037 main.go:141] libmachine: (no-preload-950431) DBG | About to run SSH command:
	I1207 21:16:03.137890   51037 main.go:141] libmachine: (no-preload-950431) DBG | exit 0
	I1207 21:16:03.229593   51037 main.go:141] libmachine: (no-preload-950431) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:03.230019   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:16:03.230604   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:03.233069   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.233426   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.233462   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.233661   51037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:16:03.233837   51037 machine.go:88] provisioning docker machine ...
	I1207 21:16:03.233855   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:03.234081   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.234254   51037 buildroot.go:166] provisioning hostname "no-preload-950431"
	I1207 21:16:03.234277   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.234386   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.236593   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.236859   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.236892   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.237079   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.237243   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.237396   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.237522   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.237653   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.238000   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.238016   51037 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-950431 && echo "no-preload-950431" | sudo tee /etc/hostname
	I1207 21:16:03.374959   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-950431
	
	I1207 21:16:03.374999   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.377825   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.378212   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.378247   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.378389   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.378604   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.378763   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.378896   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.379041   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.379363   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.379399   51037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950431/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:03.510050   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:03.510081   51037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:03.510109   51037 buildroot.go:174] setting up certificates
	I1207 21:16:03.510119   51037 provision.go:83] configureAuth start
	I1207 21:16:03.510130   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.510367   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:03.512754   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.513120   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.513151   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.513289   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.515546   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.515894   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.515947   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.516066   51037 provision.go:138] copyHostCerts
	I1207 21:16:03.516119   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:03.516138   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:03.516206   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:03.516294   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:03.516303   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:03.516328   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:03.516398   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:03.516406   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:03.516430   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:03.516480   51037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.no-preload-950431 san=[192.168.50.100 192.168.50.100 localhost 127.0.0.1 minikube no-preload-950431]
	I1207 21:16:03.662663   51037 provision.go:172] copyRemoteCerts
	I1207 21:16:03.662732   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:03.662756   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.665043   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.665344   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.665379   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.665523   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.665713   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.665887   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.666049   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:03.757956   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:03.782348   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1207 21:16:03.806388   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:16:03.831058   51037 provision.go:86] duration metric: configureAuth took 320.927373ms
	I1207 21:16:03.831086   51037 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:03.831264   51037 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:16:03.831365   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.834104   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.834489   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.834535   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.834703   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.834901   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.835087   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.835224   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.835370   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.835699   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.835721   51037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:04.154758   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:04.154783   51037 machine.go:91] provisioned docker machine in 920.933844ms
	I1207 21:16:04.154795   51037 start.go:300] post-start starting for "no-preload-950431" (driver="kvm2")
	I1207 21:16:04.154810   51037 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:04.154829   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.155148   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:04.155173   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.157776   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.158131   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.158163   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.158336   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.158560   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.158733   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.158873   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.258325   51037 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:04.262930   51037 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:04.262950   51037 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:04.263011   51037 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:04.263077   51037 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:04.263177   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:04.271602   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:04.303816   51037 start.go:303] post-start completed in 148.990598ms
	I1207 21:16:04.303849   51037 fix.go:56] fixHost completed within 23.617201529s
	I1207 21:16:04.303873   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.306576   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.306930   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.306962   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.307104   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.307326   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.307458   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.307591   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.307773   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:04.308242   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:04.308260   51037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:04.442724   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983764.388433819
	
	I1207 21:16:04.442748   51037 fix.go:206] guest clock: 1701983764.388433819
	I1207 21:16:04.442757   51037 fix.go:219] Guest: 2023-12-07 21:16:04.388433819 +0000 UTC Remote: 2023-12-07 21:16:04.303852803 +0000 UTC m=+192.597462932 (delta=84.581016ms)
	I1207 21:16:04.442797   51037 fix.go:190] guest clock delta is within tolerance: 84.581016ms
	I1207 21:16:04.442801   51037 start.go:83] releasing machines lock for "no-preload-950431", held for 23.756181397s
	I1207 21:16:04.442827   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.443065   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:04.446137   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.446578   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.446612   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.446797   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447413   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447656   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447732   51037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:04.447783   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.447902   51037 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:04.447923   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.450882   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451025   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451253   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.451280   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451470   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.451481   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.451507   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451654   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.451720   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.451923   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.452043   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.452098   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.452561   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.452761   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.565982   51037 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:04.573821   51037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:04.741571   51037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:04.749951   51037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:04.750038   51037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:04.770148   51037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:04.770176   51037 start.go:475] detecting cgroup driver to use...
	I1207 21:16:04.770244   51037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:04.787798   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:04.802346   51037 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:04.802415   51037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:04.819638   51037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:04.836910   51037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:04.947330   51037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:05.087698   51037 docker.go:219] disabling docker service ...
	I1207 21:16:05.087794   51037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:05.104790   51037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:05.122187   51037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:05.252225   51037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:05.394598   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:05.408596   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:05.429804   51037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:16:05.429876   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.441617   51037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:05.441700   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.452787   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.462684   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.472827   51037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:05.485493   51037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:05.495282   51037 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:05.495367   51037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:05.512972   51037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:05.523817   51037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:05.674940   51037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:05.866827   51037 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:05.866913   51037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:05.873044   51037 start.go:543] Will wait 60s for crictl version
	I1207 21:16:05.873109   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:05.878484   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:05.919888   51037 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:05.919979   51037 ssh_runner.go:195] Run: crio --version
	I1207 21:16:05.976795   51037 ssh_runner.go:195] Run: crio --version
	I1207 21:16:06.034745   51037 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
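Before Kubernetes is started on no-preload-950431, the run above points crictl at the CRI-O socket and rewrites /etc/crio/crio.conf.d/02-crio.conf so the pause image and cgroup driver match what kubeadm will be told to use, then restarts crio. A local sketch of those edits (the real flow does them with sed over SSH; the target values are the ones visible in the log, and the in-process rewrite is a simplification):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        // crictl talks to CRI-O over its unix socket.
        _ = os.WriteFile("/etc/crictl.yaml",
            []byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0644)

        conf, _ := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
        // pin the pause image kubeadm expects
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // switch to cgroupfs and keep conmon in the pod cgroup
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(conf, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        _ = os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", conf, 0644)
        // the log then runs systemctl daemon-reload and systemctl restart crio
    }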
	I1207 21:16:01.260865   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:01.760580   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:01.790951   50624 api_server.go:72] duration metric: took 2.55868777s to wait for apiserver process to appear ...
	I1207 21:16:01.790981   50624 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:01.791000   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.338427   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:05.338467   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:05.338483   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.436356   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:05.436385   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:05.937143   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.943626   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:05.943656   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
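With the control plane regenerated, the 50624 run switches from pgrep to polling https://192.168.72.180:8443/healthz. The first reply is a 403 for the anonymous user, then 500s while the post-start hooks marked [-] above finish, and the loop keeps retrying until it gets 200 ok or times out. A small sketch of that health poll (certificate handling is simplified to skip verification, which is an assumption; minikube authenticates with the cluster's certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for the sketch only: skip cert verification instead of
            // loading the cluster CA the way minikube does.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.72.180:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz ok")
                    return
                }
                // 403/500 bodies are logged, as in the output above
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }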
	I1207 21:16:06.036269   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:06.039546   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:06.039919   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:06.039968   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:06.040205   51037 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:06.044899   51037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:06.061053   51037 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:16:06.061106   51037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:06.099113   51037 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1207 21:16:06.099136   51037 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:16:06.099196   51037 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:06.099225   51037 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.099246   51037 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1207 21:16:06.099283   51037 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.099314   51037 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.099229   51037 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.099419   51037 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.099484   51037 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.100960   51037 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.100961   51037 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.101035   51037 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1207 21:16:06.100967   51037 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.100967   51037 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.100970   51037 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.100970   51037 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.100973   51037 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:06.234869   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.272014   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.275605   51037 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1207 21:16:06.275659   51037 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.275716   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.295068   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.329385   51037 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1207 21:16:06.329435   51037 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.329449   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.329486   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.356701   51037 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1207 21:16:06.356744   51037 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.356790   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.382536   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1207 21:16:06.389671   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.391917   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.399801   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.399908   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.399980   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.400067   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.409081   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.616824   51037 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1207 21:16:06.616864   51037 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1207 21:16:06.616876   51037 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.616884   51037 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.616923   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.616930   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.617038   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1207 21:16:06.617075   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1207 21:16:06.617086   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.617114   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:06.617122   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.617199   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1207 21:16:06.617272   51037 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1207 21:16:06.617286   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:06.617305   51037 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.617353   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.631975   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.632094   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1207 21:16:06.632181   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.436900   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:06.457077   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:06.457122   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:06.936534   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:06.943658   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1207 21:16:06.952206   50624 api_server.go:141] control plane version: v1.28.4
	I1207 21:16:06.952239   50624 api_server.go:131] duration metric: took 5.161250619s to wait for apiserver health ...
	I1207 21:16:06.952251   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:16:06.952259   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:06.954179   50624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
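The healthz exchange earlier in this block (a 500 with poststarthook/rbac/bootstrap-roles still pending, followed by a 200 about half a second later) is the usual picture while the apiserver finishes its post-start hooks. Below is a rough Go sketch of such a polling loop; the endpoint URL comes from the log, while the interval, timeout and the decision to skip TLS verification are assumptions made for illustration, not minikube's code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, printing the body on failure much like the report.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			// the apiserver serves a cert we have not pinned here, so
			// verification is skipped in this sketch (assumption)
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.180:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}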
	I1207 21:16:05.844251   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting to get IP...
	I1207 21:16:05.845419   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:05.845793   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:05.845896   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:05.845790   51802 retry.go:31] will retry after 224.053393ms: waiting for machine to come up
	I1207 21:16:06.071071   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.071521   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.071545   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.071464   51802 retry.go:31] will retry after 272.776477ms: waiting for machine to come up
	I1207 21:16:06.346126   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.346739   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.346773   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.346683   51802 retry.go:31] will retry after 373.022784ms: waiting for machine to come up
	I1207 21:16:06.721567   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.722089   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.722115   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.722029   51802 retry.go:31] will retry after 380.100559ms: waiting for machine to come up
	I1207 21:16:07.103408   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.103853   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.103884   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:07.103798   51802 retry.go:31] will retry after 473.24776ms: waiting for machine to come up
	I1207 21:16:07.578548   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.579087   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.579232   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:07.579176   51802 retry.go:31] will retry after 892.826082ms: waiting for machine to come up
	I1207 21:16:08.473531   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:08.474027   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:08.474058   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:08.473989   51802 retry.go:31] will retry after 1.042648737s: waiting for machine to come up
	I1207 21:16:09.518823   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:09.519321   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:09.519363   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:09.519213   51802 retry.go:31] will retry after 948.481622ms: waiting for machine to come up
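The repeated "will retry after ..." lines above come from a retry helper that probes for the domain's DHCP lease with a growing, jittered delay until the machine reports an IP. A small Go sketch of that shape follows; the network name and MAC address are copied from the log, but the probe (a virsh call here) and the backoff constants are stand-ins, not the kvm2 driver's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// retryWithBackoff keeps calling probe until it succeeds or attempts run out,
// sleeping a growing, jittered interval in between, like the retry.go lines above.
func retryWithBackoff(attempts int, initial time.Duration, probe func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := probe(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return fmt.Errorf("machine did not come up after %d attempts", attempts)
}

func main() {
	// probe: ask virsh for DHCP leases on the network and look for our MAC
	// (network and MAC copied from the log; the use of virsh is an assumption)
	err := retryWithBackoff(10, 200*time.Millisecond, func() error {
		out, err := exec.Command("virsh", "-c", "qemu:///system",
			"net-dhcp-leases", "mk-default-k8s-diff-port-275828").Output()
		if err != nil {
			return err
		}
		if !strings.Contains(string(out), "52:54:00:f3:1f:c5") {
			return fmt.Errorf("no lease yet")
		}
		return nil
	})
	if err != nil {
		fmt.Println(err)
	}
}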
	I1207 21:16:06.955727   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:06.967724   50624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:16:06.990163   50624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:07.001387   50624 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:07.001425   50624 system_pods.go:61] "coredns-5dd5756b68-hlpsb" [c1f9f7db-0741-483c-9e39-d6f0ce4715d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:07.001436   50624 system_pods.go:61] "etcd-embed-certs-598346" [acda3700-87a2-4442-94e6-1d17288e7cee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:07.001446   50624 system_pods.go:61] "kube-apiserver-embed-certs-598346" [e1439056-061b-4add-a399-c55a816fba70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:07.001456   50624 system_pods.go:61] "kube-controller-manager-embed-certs-598346" [b4c80c36-da2c-4c46-b655-3c6bb2a96ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:07.001466   50624 system_pods.go:61] "kube-proxy-jqhnn" [e2635205-e67a-4b56-a7b4-82fe97b5fe7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:07.001490   50624 system_pods.go:61] "kube-scheduler-embed-certs-598346" [3b90e1d4-9c0f-46e4-a7b7-5e42717a8b70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:07.001499   50624 system_pods.go:61] "metrics-server-57f55c9bc5-sndh4" [9a052ce0-760f-4cfd-a958-971daa14ea02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:07.001511   50624 system_pods.go:61] "storage-provisioner" [bf244954-a1d7-4b51-9085-387e60d02792] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:07.001524   50624 system_pods.go:74] duration metric: took 11.336763ms to wait for pod list to return data ...
	I1207 21:16:07.001538   50624 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:07.007697   50624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:07.007737   50624 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:07.007752   50624 node_conditions.go:105] duration metric: took 6.207447ms to run NodePressure ...
	I1207 21:16:07.007770   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:07.287760   50624 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:07.297260   50624 kubeadm.go:787] kubelet initialised
	I1207 21:16:07.297285   50624 kubeadm.go:788] duration metric: took 9.495153ms waiting for restarted kubelet to initialise ...
	I1207 21:16:07.297296   50624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:07.304800   50624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.313488   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.313523   50624 pod_ready.go:81] duration metric: took 8.689063ms waiting for pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.313535   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.313545   50624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.321603   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "etcd-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.321637   50624 pod_ready.go:81] duration metric: took 8.078752ms waiting for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.321649   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "etcd-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.321658   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.333040   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.333068   50624 pod_ready.go:81] duration metric: took 11.399287ms waiting for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.333081   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.333089   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.397606   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.397632   50624 pod_ready.go:81] duration metric: took 64.53373ms waiting for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.397642   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.397648   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqhnn" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:08.713161   50624 pod_ready.go:92] pod "kube-proxy-jqhnn" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:08.713188   50624 pod_ready.go:81] duration metric: took 1.315530906s waiting for pod "kube-proxy-jqhnn" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:08.713201   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:10.919896   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
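In parallel with the image work, this runner is waiting for individual kube-system pods to report Ready, skipping those hosted on a node that is itself still NotReady. One way to reproduce that wait outside the test harness is to poll kubectl, as in the hedged sketch below; the context, namespace and pod name are taken from the log, while the jsonpath query, timings and helper name are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl for the pod's Ready condition and returns true when
// it is "True"; equivalent in spirit to the pod_ready.go waits in the log.
func podReady(kubeContext, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
		"get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := podReady("embed-certs-598346", "kube-system", "kube-scheduler-embed-certs-598346")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}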
	I1207 21:16:07.059825   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:10.061030   51037 ssh_runner.go:235] Completed: which crictl: (3.443650725s)
	I1207 21:16:10.061121   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:10.061130   51037 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (3.443992158s)
	I1207 21:16:10.061160   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1207 21:16:10.061174   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.444033736s)
	I1207 21:16:10.061199   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1207 21:16:10.061225   51037 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:10.061245   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1: (3.429236441s)
	I1207 21:16:10.061286   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:10.061294   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:10.061296   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.429094571s)
	I1207 21:16:10.061330   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1207 21:16:10.061346   51037 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.001491955s)
	I1207 21:16:10.061361   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:10.061387   51037 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1207 21:16:10.061402   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:10.061430   51037 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:10.061469   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:10.469685   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:10.470224   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:10.470251   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:10.470187   51802 retry.go:31] will retry after 1.846436384s: waiting for machine to come up
	I1207 21:16:12.319116   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:12.319558   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:12.319590   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:12.319512   51802 retry.go:31] will retry after 1.415005437s: waiting for machine to come up
	I1207 21:16:13.736082   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:13.736599   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:13.736630   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:13.736533   51802 retry.go:31] will retry after 2.499952402s: waiting for machine to come up
	I1207 21:16:13.413966   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:15.414181   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:14.287122   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.225788884s)
	I1207 21:16:14.287166   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1207 21:16:14.287165   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: (4.226018563s)
	I1207 21:16:14.287190   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:14.287204   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:14.287130   51037 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.225706156s)
	I1207 21:16:14.287208   51037 ssh_runner.go:235] Completed: which crictl: (4.225716226s)
	I1207 21:16:14.287294   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:14.287310   51037 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (4.225934747s)
	I1207 21:16:14.287322   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1207 21:16:14.287325   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:14.287270   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1207 21:16:14.287238   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:14.338957   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 21:16:14.339087   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:16.589704   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.302291312s)
	I1207 21:16:16.589740   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1207 21:16:16.589764   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:16.589777   51037 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.302463063s)
	I1207 21:16:16.589816   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:16.589817   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1207 21:16:16.589887   51037 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.250737859s)
	I1207 21:16:16.589912   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1207 21:16:16.238979   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:16.239340   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:16.239367   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:16.239304   51802 retry.go:31] will retry after 2.478988074s: waiting for machine to come up
	I1207 21:16:18.720359   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:18.720892   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:18.720925   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:18.720840   51802 retry.go:31] will retry after 4.119588433s: waiting for machine to come up
	I1207 21:16:17.913477   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:18.407386   50624 pod_ready.go:92] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:18.407417   50624 pod_ready.go:81] duration metric: took 9.694207323s waiting for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:18.407431   50624 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:20.429952   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:18.142546   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (1.552699587s)
	I1207 21:16:18.142620   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1207 21:16:18.142658   51037 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:18.142737   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:20.432330   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.289556402s)
	I1207 21:16:20.432358   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1207 21:16:20.432386   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:20.432436   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:22.843120   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:22.843516   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:22.843540   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:22.843470   51802 retry.go:31] will retry after 3.969701228s: waiting for machine to come up
	I1207 21:16:22.431295   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:24.929166   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:22.891954   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.459495307s)
	I1207 21:16:22.891978   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1207 21:16:22.892001   51037 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:22.892056   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:23.742939   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1207 21:16:23.743011   51037 cache_images.go:123] Successfully loaded all cached images
	I1207 21:16:23.743021   51037 cache_images.go:92] LoadImages completed in 17.643875393s
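The LoadImages sequence that just completed follows one pattern per image: inspect it with podman, remove any copy that does not match the expected ID, then load the cached tarball that was copied to /var/lib/minikube/images. The Go sketch below summarises that check-remove-load flow; the podman/crictl commands, image name, hash and tarball path are taken verbatim from the log, while wrapping them in a local helper like this is an illustration, not minikube's implementation (which runs the same commands over SSH).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command on the node (locally here; minikube does it over SSH)
// and returns the trimmed combined output.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// ensureImage makes sure the runtime has image at the expected ID, otherwise
// removes the stale copy and loads the cached tarball, as the log does.
func ensureImage(image, wantID, cachedTar string) error {
	id, _ := run("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
	if id == wantID {
		return nil // already present, nothing to transfer
	}
	// drop whatever is there so the load does not collide with a stale tag
	run("sudo", "crictl", "rmi", image)
	if out, err := run("sudo", "podman", "load", "-i", cachedTar); err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// image, digest and tarball path copied from the log above
	err := ensureImage(
		"registry.k8s.io/etcd:3.5.10-0",
		"a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7",
		"/var/lib/minikube/images/etcd_3.5.10-0",
	)
	if err != nil {
		fmt.Println(err)
	}
}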
	I1207 21:16:23.743107   51037 ssh_runner.go:195] Run: crio config
	I1207 21:16:23.802064   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:16:23.802087   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:23.802106   51037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:23.802128   51037 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.100 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-950431 NodeName:no-preload-950431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:16:23.802258   51037 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-950431"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:23.802329   51037 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-950431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-950431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:16:23.802382   51037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1207 21:16:23.813052   51037 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:23.813143   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:23.823249   51037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1207 21:16:23.840999   51037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1207 21:16:23.857599   51037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1207 21:16:23.873664   51037 ssh_runner.go:195] Run: grep 192.168.50.100	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:23.877208   51037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:23.888109   51037 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431 for IP: 192.168.50.100
	I1207 21:16:23.888148   51037 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:23.888298   51037 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:23.888333   51037 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:23.888394   51037 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.key
	I1207 21:16:23.888453   51037 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.key.8f36cd02
	I1207 21:16:23.888490   51037 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.key
	I1207 21:16:23.888598   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:23.888626   51037 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:23.888638   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:23.888669   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:23.888701   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:23.888725   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:23.888769   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:23.889405   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:23.911313   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 21:16:23.935796   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:23.960576   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:16:23.983952   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:24.005755   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:24.027232   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:24.049398   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:24.073975   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:24.097326   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:24.118396   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:24.140590   51037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:24.157287   51037 ssh_runner.go:195] Run: openssl version
	I1207 21:16:24.163079   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:24.173618   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.177973   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.178038   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.183537   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:24.193750   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:24.203836   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.208278   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.208324   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.213906   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:24.223939   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:24.234037   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.238379   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.238443   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.243650   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:24.253904   51037 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:24.258343   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:24.264011   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:24.269609   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:24.275294   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:24.280969   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:24.286763   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
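The run of openssl x509 -checkend 86400 calls above verifies that none of the control-plane certificates expire within the next day. The same check can be done natively in Go with crypto/x509, roughly as follows; the certificate path is one from the log, and the helper is an illustrative equivalent of the openssl invocation rather than the code the test uses.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what `openssl x509 -checkend` answers via its exit status.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// path taken from the log; 86400 seconds = 24h, as in -checkend 86400
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}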
	I1207 21:16:24.292414   51037 kubeadm.go:404] StartCluster: {Name:no-preload-950431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-950431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:24.292505   51037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:24.292565   51037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:24.342426   51037 cri.go:89] found id: ""
	I1207 21:16:24.342596   51037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:24.353900   51037 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:24.353939   51037 kubeadm.go:636] restartCluster start
	I1207 21:16:24.353999   51037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:24.363465   51037 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.364722   51037 kubeconfig.go:92] found "no-preload-950431" server: "https://192.168.50.100:8443"
	I1207 21:16:24.367198   51037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:24.378918   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.378971   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.391331   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.391354   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.391393   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.403003   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.903722   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.903814   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.915891   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:25.403459   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:25.403568   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:25.415677   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:25.903683   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:25.903765   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:25.915474   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:26.403146   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:26.403258   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:26.414072   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.031043   50270 start.go:369] acquired machines lock for "old-k8s-version-483745" in 1m1.958159244s
	I1207 21:16:28.031117   50270 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:16:28.031127   50270 fix.go:54] fixHost starting: 
	I1207 21:16:28.031477   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:28.031504   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:28.047757   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I1207 21:16:28.048134   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:28.048598   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:16:28.048628   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:28.048962   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:28.049123   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:28.049278   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:16:28.050698   50270 fix.go:102] recreateIfNeeded on old-k8s-version-483745: state=Stopped err=<nil>
	I1207 21:16:28.050716   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	W1207 21:16:28.050943   50270 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:16:28.053462   50270 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-483745" ...
	I1207 21:16:28.054995   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Start
	I1207 21:16:28.055169   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring networks are active...
	I1207 21:16:28.055803   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring network default is active
	I1207 21:16:28.056167   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring network mk-old-k8s-version-483745 is active
	I1207 21:16:28.056613   50270 main.go:141] libmachine: (old-k8s-version-483745) Getting domain xml...
	I1207 21:16:28.057267   50270 main.go:141] libmachine: (old-k8s-version-483745) Creating domain...
	I1207 21:16:26.815724   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.816306   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Found IP for machine: 192.168.39.254
	I1207 21:16:26.816346   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Reserving static IP address...
	I1207 21:16:26.816373   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has current primary IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.816843   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-275828", mac: "52:54:00:f3:1f:c5", ip: "192.168.39.254"} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.816874   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Reserved static IP address: 192.168.39.254
	I1207 21:16:26.816895   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | skip adding static IP to network mk-default-k8s-diff-port-275828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-275828", mac: "52:54:00:f3:1f:c5", ip: "192.168.39.254"}
	I1207 21:16:26.816916   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Getting to WaitForSSH function...
	I1207 21:16:26.816933   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for SSH to be available...
	I1207 21:16:26.819265   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.819625   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.819654   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.819808   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Using SSH client type: external
	I1207 21:16:26.819840   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa (-rw-------)
	I1207 21:16:26.819880   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:26.819908   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | About to run SSH command:
	I1207 21:16:26.819930   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | exit 0
	I1207 21:16:26.913932   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:26.914232   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetConfigRaw
	I1207 21:16:26.915043   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:26.917486   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.917899   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.917944   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.918182   51113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:16:26.918360   51113 machine.go:88] provisioning docker machine ...
	I1207 21:16:26.918380   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:26.918587   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:26.918775   51113 buildroot.go:166] provisioning hostname "default-k8s-diff-port-275828"
	I1207 21:16:26.918805   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:26.918971   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:26.921227   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.921482   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.921515   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.921657   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:26.921818   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:26.922006   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:26.922162   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:26.922317   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:26.922695   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:26.922713   51113 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-275828 && echo "default-k8s-diff-port-275828" | sudo tee /etc/hostname
	I1207 21:16:27.066745   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-275828
	
	I1207 21:16:27.066778   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.069493   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.069842   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.069895   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.070078   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.070295   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.070446   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.070596   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.070824   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.071271   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.071302   51113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-275828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-275828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-275828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:27.206475   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:27.206503   51113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:27.206534   51113 buildroot.go:174] setting up certificates
	I1207 21:16:27.206545   51113 provision.go:83] configureAuth start
	I1207 21:16:27.206553   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:27.206818   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:27.209295   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.209632   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.209666   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.209763   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.211882   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.212147   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.212176   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.212250   51113 provision.go:138] copyHostCerts
	I1207 21:16:27.212306   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:27.212326   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:27.212396   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:27.212501   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:27.212511   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:27.212540   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:27.212617   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:27.212627   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:27.212656   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:27.212728   51113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-275828 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube default-k8s-diff-port-275828]
	I1207 21:16:27.273212   51113 provision.go:172] copyRemoteCerts
	I1207 21:16:27.273291   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:27.273321   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.275905   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.276185   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.276219   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.276380   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.276569   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.276703   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.276814   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:27.371834   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:27.394096   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1207 21:16:27.416619   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:16:27.443103   51113 provision.go:86] duration metric: configureAuth took 236.548224ms
	I1207 21:16:27.443127   51113 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:27.443336   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:16:27.443406   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.446005   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.446303   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.446334   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.446477   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.446648   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.446789   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.446959   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.447158   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.447600   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.447623   51113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:27.760539   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:27.760582   51113 machine.go:91] provisioned docker machine in 842.207987ms
	I1207 21:16:27.760608   51113 start.go:300] post-start starting for "default-k8s-diff-port-275828" (driver="kvm2")
	I1207 21:16:27.760617   51113 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:27.760633   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:27.760993   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:27.761030   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.763527   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.763923   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.763968   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.764077   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.764254   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.764386   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.764559   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:27.860772   51113 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:27.865258   51113 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:27.865285   51113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:27.865348   51113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:27.865422   51113 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:27.865537   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:27.874901   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:27.896890   51113 start.go:303] post-start completed in 136.257327ms
	I1207 21:16:27.896912   51113 fix.go:56] fixHost completed within 23.453929111s
	I1207 21:16:27.896932   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.899422   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.899740   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.899780   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.899916   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.900104   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.900265   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.900400   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.900601   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.900920   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.900935   51113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 21:16:28.030917   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983787.976128099
	
	I1207 21:16:28.030936   51113 fix.go:206] guest clock: 1701983787.976128099
	I1207 21:16:28.030943   51113 fix.go:219] Guest: 2023-12-07 21:16:27.976128099 +0000 UTC Remote: 2023-12-07 21:16:27.896915587 +0000 UTC m=+213.119643923 (delta=79.212512ms)
	I1207 21:16:28.030970   51113 fix.go:190] guest clock delta is within tolerance: 79.212512ms
	I1207 21:16:28.030975   51113 start.go:83] releasing machines lock for "default-k8s-diff-port-275828", held for 23.588040931s
	I1207 21:16:28.031003   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.031255   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:28.033864   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.034277   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.034318   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.034501   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035101   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035283   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035354   51113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:28.035399   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:28.035519   51113 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:28.035543   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:28.038353   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038570   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038636   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.038675   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038789   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:28.038993   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:28.039013   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.039035   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.039152   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:28.039189   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:28.039319   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:28.039368   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:28.039495   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:28.039619   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:28.161850   51113 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:28.167540   51113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:28.311477   51113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:28.319102   51113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:28.319177   51113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:28.334118   51113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:28.334138   51113 start.go:475] detecting cgroup driver to use...
	I1207 21:16:28.334187   51113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:28.351563   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:28.364950   51113 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:28.365015   51113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:28.380367   51113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:28.396070   51113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:28.504230   51113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:28.634829   51113 docker.go:219] disabling docker service ...
	I1207 21:16:28.634893   51113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:28.648955   51113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:28.660615   51113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:28.781577   51113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:28.899307   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:28.912673   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:28.931310   51113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:16:28.931384   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.941006   51113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:28.941083   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.951712   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.963062   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.973981   51113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:28.984828   51113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:28.993884   51113 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:28.993992   51113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:29.007812   51113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:29.017781   51113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:29.147958   51113 ssh_runner.go:195] Run: sudo systemctl restart crio
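The CRI-O setup applied in the lines above can be reproduced by hand inside the guest; the following is a sketch assembled from the exact commands in this log (paths, pause image, and cgroup driver are taken verbatim from the log, not from a separate minikube script):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo rm -rf /etc/cni/net.mk
    sudo modprobe br_netfilter        # the sysctl probe above failed only because the module was not loaded yet
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio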
	I1207 21:16:29.329720   51113 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:29.329781   51113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:29.336048   51113 start.go:543] Will wait 60s for crictl version
	I1207 21:16:29.336109   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:16:29.340075   51113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:29.378207   51113 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:29.378289   51113 ssh_runner.go:195] Run: crio --version
	I1207 21:16:29.438034   51113 ssh_runner.go:195] Run: crio --version
	I1207 21:16:29.487899   51113 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:16:29.489336   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:29.492387   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:29.492824   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:29.492858   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:29.493105   51113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:29.497882   51113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
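The /etc/hosts rewrite above is idempotent: it strips any existing host.minikube.internal line and appends a single fresh entry pointing the name at 192.168.39.1. A quick check of the result, with the expected value taken from the grep on the preceding line:

    grep 'host.minikube.internal' /etc/hosts
    # expected: one tab-separated entry, "192.168.39.1  host.minikube.internal"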
	I1207 21:16:29.510857   51113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:16:29.510910   51113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:29.557513   51113 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:16:29.557590   51113 ssh_runner.go:195] Run: which lz4
	I1207 21:16:29.561849   51113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 21:16:29.566351   51113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:16:29.566383   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:16:26.930512   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:29.442726   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:26.903645   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:26.903716   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:26.915728   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:27.403874   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:27.403939   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:27.415501   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:27.904082   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:27.904150   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:27.916404   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.404050   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:28.404143   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:28.416757   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.903144   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:28.903202   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:28.914709   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.403236   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:29.403324   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:29.415595   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.903823   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:29.903908   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:29.920093   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:30.403786   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:30.403864   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:30.417374   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:30.903246   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:30.903335   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:30.916333   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:31.403909   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:31.403984   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:31.418792   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
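The repeated "Checking apiserver status ..." entries from process 51037 are a single polling loop: roughly every 500ms the restart path runs pgrep over SSH looking for a kube-apiserver process, and it gives up when its context deadline expires (the "needs reconfigure: apiserver error: context deadline exceeded" line further down is that loop terminating). A minimal shell equivalent of the probe, run inside the guest, looks like this; the 60-second bound here is illustrative, not minikube's actual timeout:

    # succeed as soon as a kube-apiserver started by this cluster is visible
    for i in $(seq 1 120); do
        sudo pgrep -xnf 'kube-apiserver.*minikube.*' && exit 0
        sleep 0.5
    done
    echo "timed out waiting for kube-apiserver" >&2
    exit 1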
	I1207 21:16:29.352362   50270 main.go:141] libmachine: (old-k8s-version-483745) Waiting to get IP...
	I1207 21:16:29.353395   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.353871   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.353965   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.353847   51971 retry.go:31] will retry after 307.502031ms: waiting for machine to come up
	I1207 21:16:29.663412   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.663958   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.663990   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.663889   51971 retry.go:31] will retry after 328.013518ms: waiting for machine to come up
	I1207 21:16:29.993550   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.994129   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.994160   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.994066   51971 retry.go:31] will retry after 315.323859ms: waiting for machine to come up
	I1207 21:16:30.310570   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:30.311106   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:30.311139   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:30.311055   51971 retry.go:31] will retry after 547.317149ms: waiting for machine to come up
	I1207 21:16:30.859753   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:30.860500   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:30.860532   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:30.860479   51971 retry.go:31] will retry after 591.81737ms: waiting for machine to come up
	I1207 21:16:31.453939   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:31.454481   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:31.454508   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:31.454426   51971 retry.go:31] will retry after 818.736684ms: waiting for machine to come up
	I1207 21:16:32.274582   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:32.275065   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:32.275100   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:32.275018   51971 retry.go:31] will retry after 865.865666ms: waiting for machine to come up
	I1207 21:16:33.142356   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:33.142713   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:33.142748   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:33.142655   51971 retry.go:31] will retry after 1.270743306s: waiting for machine to come up
	I1207 21:16:31.473652   51113 crio.go:444] Took 1.911834 seconds to copy over tarball
	I1207 21:16:31.473729   51113 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:16:34.448164   51113 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974406678s)
	I1207 21:16:34.448185   51113 crio.go:451] Took 2.974507 seconds to extract the tarball
	I1207 21:16:34.448196   51113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:16:34.493579   51113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:34.555669   51113 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:16:34.555694   51113 cache_images.go:84] Images are preloaded, skipping loading
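The preload path above copies preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 into the guest and unpacks it under /var, so the CRI-O image store is populated without pulling from a registry. A quick manual confirmation that the images landed (the crictl call is the same one the log runs; the grep is only added here for brevity):

    sudo crictl images | grep kube-apiserver
    # should show registry.k8s.io/kube-apiserver:v1.28.4 once extraction has succeeded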
	I1207 21:16:34.555760   51113 ssh_runner.go:195] Run: crio config
	I1207 21:16:34.637813   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:16:34.637855   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:34.637874   51113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:34.637909   51113 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-275828 NodeName:default-k8s-diff-port-275828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:16:34.638088   51113 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.254
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-275828"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:34.638186   51113 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-275828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
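The kubelet unit override shown above is copied to the node a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. If a node comes up with the wrong --node-ip or container-runtime endpoint, the quickest check is to dump the unit actually in effect on the machine:

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl cat kubelet    # base unit plus every drop-in systemd is actually applying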
	I1207 21:16:34.638255   51113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:16:34.651147   51113 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:34.651264   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:34.660855   51113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1207 21:16:34.678841   51113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:16:34.696338   51113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1207 21:16:34.718058   51113 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:34.722640   51113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:34.737097   51113 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828 for IP: 192.168.39.254
	I1207 21:16:34.737138   51113 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:34.737316   51113 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:34.737367   51113 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:34.737459   51113 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.key
	I1207 21:16:34.737557   51113 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.key.9e1cae77
	I1207 21:16:34.737614   51113 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.key
	I1207 21:16:34.737745   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:34.737783   51113 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:34.737799   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:34.737835   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:34.737870   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:34.737904   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:34.737976   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:34.738542   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:34.768389   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:16:34.801112   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:31.931027   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:34.430620   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:31.903642   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:31.903781   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:31.919330   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:32.403857   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:32.403949   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:32.419078   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:32.903477   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:32.903561   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:32.918946   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:33.403477   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:33.403605   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:33.416411   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:33.903561   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:33.903690   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:33.915554   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:34.379314   51037 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:16:34.379347   51037 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:16:34.379361   51037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:16:34.379450   51037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:34.427182   51037 cri.go:89] found id: ""
	I1207 21:16:34.427255   51037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:16:34.448141   51037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:16:34.462411   51037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:16:34.462494   51037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:34.474410   51037 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:34.474442   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:34.646144   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.548212   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.745964   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.818060   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.899490   51037 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:16:35.899616   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:35.916336   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:36.432466   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:34.415333   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:34.415908   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:34.415935   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:34.415819   51971 retry.go:31] will retry after 1.846003214s: waiting for machine to come up
	I1207 21:16:36.262900   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:36.263321   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:36.263343   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:36.263283   51971 retry.go:31] will retry after 1.858599877s: waiting for machine to come up
	I1207 21:16:38.124144   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:38.124669   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:38.124701   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:38.124622   51971 retry.go:31] will retry after 2.443451278s: waiting for machine to come up
	I1207 21:16:34.830966   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:16:35.094040   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:35.121234   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:35.148659   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:35.176938   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:35.206320   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:35.234907   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:35.261034   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:35.286500   51113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:35.306742   51113 ssh_runner.go:195] Run: openssl version
	I1207 21:16:35.314676   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:35.325752   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.332066   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.332147   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.339606   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:35.350274   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:35.360328   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.365516   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.365593   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.371482   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:35.381328   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:35.391869   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.396986   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.397051   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.402939   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
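	The `ln -fs` commands above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (<hash>.0) so the system trust store can resolve it. A minimal Go sketch of the same idea (the installCACert helper is hypothetical and shells out to openssl; it is not minikube's own code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert links a PEM certificate into /etc/ssl/certs under its
	// OpenSSL subject-hash name (<hash>.0), mirroring the `openssl x509 -hash`
	// plus `ln -fs` pair seen in the log above.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // drop a stale link so repeated runs stay idempotent
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}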
	I1207 21:16:35.413428   51113 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:35.419598   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:35.427748   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:35.435492   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:35.442272   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:35.450180   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:35.459639   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
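	The `-checkend 86400` probes above verify that each control-plane certificate stays valid for at least another 24 hours. A rough Go equivalent of that check, as a sketch using crypto/x509 (the expiresWithin helper and the sample path are illustrative only):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within the given window, which is what `openssl x509 -checkend` tests.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}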
	I1207 21:16:35.467615   51113 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:35.467736   51113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:35.467793   51113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:35.504593   51113 cri.go:89] found id: ""
	I1207 21:16:35.504685   51113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:35.514155   51113 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:35.514182   51113 kubeadm.go:636] restartCluster start
	I1207 21:16:35.514255   51113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:35.525515   51113 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:35.526798   51113 kubeconfig.go:92] found "default-k8s-diff-port-275828" server: "https://192.168.39.254:8444"
	I1207 21:16:35.529447   51113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:35.540876   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:35.540934   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:35.555494   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:35.555519   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:35.555569   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:35.569455   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.069801   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:36.069903   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:36.083366   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.569984   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:36.570078   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:36.585387   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:37.069869   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:37.069980   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:37.086900   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:37.570490   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:37.570597   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:37.586215   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:38.069601   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:38.069709   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:38.084557   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:38.570194   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:38.570306   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:38.586686   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:39.070433   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:39.070518   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:39.088460   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:39.570579   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:39.570654   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:39.588478   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.785543   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:38.932981   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:36.932228   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:37.432719   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:37.932863   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.432661   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.932210   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.965380   51037 api_server.go:72] duration metric: took 3.065893789s to wait for apiserver process to appear ...
	I1207 21:16:38.965409   51037 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:38.965425   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
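	The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are simply a poll for the restarted apiserver process before health checking begins. A small Go sketch of such a wait loop (the waitForProcess helper is hypothetical; the real probes also run pgrep under sudo inside the guest):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForProcess polls pgrep until a process matching pattern appears or the
	// timeout expires, returning the matched PID on success.
	func waitForProcess(pattern string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("pgrep", "-xnf", pattern).Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			// pgrep exits non-zero when nothing matches; wait and retry.
			time.Sleep(500 * time.Millisecond)
		}
		return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
	}

	func main() {
		pid, err := waitForProcess("kube-apiserver.*minikube.*", time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver pid:", pid)
	}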
	I1207 21:16:40.571221   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:40.571824   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:40.571873   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:40.571774   51971 retry.go:31] will retry after 2.349695925s: waiting for machine to come up
	I1207 21:16:42.923107   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:42.923582   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:42.923618   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:42.923549   51971 retry.go:31] will retry after 4.503894046s: waiting for machine to come up
	I1207 21:16:40.070126   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:40.070229   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:40.085086   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:40.570237   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:40.570329   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:40.584997   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:41.069554   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:41.069706   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:41.084654   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:41.570175   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:41.570260   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:41.581973   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:42.070546   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:42.070641   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:42.085859   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:42.570428   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:42.570534   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:42.585491   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.070017   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:43.070132   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:43.082461   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.569992   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:43.570093   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:43.585221   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:44.069681   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:44.069749   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:44.081499   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:44.569999   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:44.570083   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:44.585512   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.598644   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:43.598675   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:43.598689   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:43.649508   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:43.649553   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:44.150221   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:44.155890   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:44.155914   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:44.649610   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:44.655402   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:44.655437   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:45.150082   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:45.156432   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 200:
	ok
	I1207 21:16:45.172948   51037 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 21:16:45.172983   51037 api_server.go:131] duration metric: took 6.207566234s to wait for apiserver health ...
	I1207 21:16:45.172996   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:16:45.173002   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:45.175018   51037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
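	Once the apiserver process exists, readiness is decided by polling /healthz: the 403 and 500 responses above are expected while the RBAC bootstrap roles and system priority classes are still being created, and the wait ends at the first 200. A minimal sketch of that polling loop in Go (the waitHealthz helper is hypothetical; TLS verification is skipped only so the sketch stays an anonymous probe like the one in the log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
	// the deadline passes, tolerating the 403/500 answers seen during bootstrap.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.50.100:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}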
	I1207 21:16:41.430106   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:43.431417   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:45.932644   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:45.176436   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:45.231836   51037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:16:45.250256   51037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:45.270151   51037 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:45.270188   51037 system_pods.go:61] "coredns-76f75df574-qfwbr" [577161a0-8d68-41cc-88cd-1bd56e99b7aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:45.270198   51037 system_pods.go:61] "etcd-no-preload-950431" [8e49a6a7-c1e5-469d-9b30-c8e59471effb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:45.270210   51037 system_pods.go:61] "kube-apiserver-no-preload-950431" [15bc33db-995d-4102-9a2b-e991209c2946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:45.270220   51037 system_pods.go:61] "kube-controller-manager-no-preload-950431" [c263b58e-2aea-455d-8b2f-8915f1c6e820] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:45.270232   51037 system_pods.go:61] "kube-proxy-mzv22" [96e51e2f-17be-4724-ae28-99dfa63e9976] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:45.270241   51037 system_pods.go:61] "kube-scheduler-no-preload-950431" [c040d573-c78f-4149-8be6-af33fc6ea186] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:45.270257   51037 system_pods.go:61] "metrics-server-57f55c9bc5-fv8x4" [ac03a70e-1059-474f-b6f6-5974f0900bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:45.270268   51037 system_pods.go:61] "storage-provisioner" [3f942481-221c-4e69-a876-f82676cde788] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:45.270279   51037 system_pods.go:74] duration metric: took 19.99813ms to wait for pod list to return data ...
	I1207 21:16:45.270291   51037 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:45.274636   51037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:45.274667   51037 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:45.274681   51037 node_conditions.go:105] duration metric: took 4.381452ms to run NodePressure ...
	I1207 21:16:45.274700   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:45.597857   51037 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:45.603394   51037 kubeadm.go:787] kubelet initialised
	I1207 21:16:45.603423   51037 kubeadm.go:788] duration metric: took 5.535827ms waiting for restarted kubelet to initialise ...
	I1207 21:16:45.603432   51037 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:45.612509   51037 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-qfwbr" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:47.430850   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.431364   50270 main.go:141] libmachine: (old-k8s-version-483745) Found IP for machine: 192.168.61.171
	I1207 21:16:47.431389   50270 main.go:141] libmachine: (old-k8s-version-483745) Reserving static IP address...
	I1207 21:16:47.431415   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has current primary IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.431791   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "old-k8s-version-483745", mac: "52:54:00:55:c8:35", ip: "192.168.61.171"} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.431827   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | skip adding static IP to network mk-old-k8s-version-483745 - found existing host DHCP lease matching {name: "old-k8s-version-483745", mac: "52:54:00:55:c8:35", ip: "192.168.61.171"}
	I1207 21:16:47.431845   50270 main.go:141] libmachine: (old-k8s-version-483745) Reserved static IP address: 192.168.61.171
	I1207 21:16:47.431866   50270 main.go:141] libmachine: (old-k8s-version-483745) Waiting for SSH to be available...
	I1207 21:16:47.431884   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Getting to WaitForSSH function...
	I1207 21:16:47.434071   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.434391   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.434423   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.434511   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Using SSH client type: external
	I1207 21:16:47.434548   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa (-rw-------)
	I1207 21:16:47.434590   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:47.434624   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | About to run SSH command:
	I1207 21:16:47.434642   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | exit 0
	I1207 21:16:47.529747   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:47.530150   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetConfigRaw
	I1207 21:16:47.530743   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:47.533361   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.533690   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.533728   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.534019   50270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/config.json ...
	I1207 21:16:47.534201   50270 machine.go:88] provisioning docker machine ...
	I1207 21:16:47.534219   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:47.534379   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.534549   50270 buildroot.go:166] provisioning hostname "old-k8s-version-483745"
	I1207 21:16:47.534578   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.534793   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.537037   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.537448   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.537482   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.537621   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:47.537788   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.537963   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.538107   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:47.538276   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:47.538728   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:47.538751   50270 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-483745 && echo "old-k8s-version-483745" | sudo tee /etc/hostname
	I1207 21:16:47.694514   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-483745
	
	I1207 21:16:47.694552   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.697720   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.698181   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.698217   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.698413   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:47.698602   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.698752   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.698958   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:47.699158   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:47.699617   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:47.699646   50270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-483745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-483745/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-483745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:47.851750   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:47.851781   50270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:47.851817   50270 buildroot.go:174] setting up certificates
	I1207 21:16:47.851830   50270 provision.go:83] configureAuth start
	I1207 21:16:47.851848   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.852181   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:47.855229   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.855607   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.855633   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.855891   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.858432   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.858811   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.858868   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.859066   50270 provision.go:138] copyHostCerts
	I1207 21:16:47.859126   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:47.859146   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:47.859211   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:47.859312   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:47.859322   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:47.859352   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:47.859426   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:47.859436   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:47.859465   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:47.859532   50270 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-483745 san=[192.168.61.171 192.168.61.171 localhost 127.0.0.1 minikube old-k8s-version-483745]
	I1207 21:16:48.080700   50270 provision.go:172] copyRemoteCerts
	I1207 21:16:48.080764   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:48.080787   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.083799   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.084261   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.084325   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.084545   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.084752   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.084874   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.085025   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.188586   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:48.217051   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1207 21:16:48.245046   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:16:48.276344   50270 provision.go:86] duration metric: configureAuth took 424.496766ms
	I1207 21:16:48.276381   50270 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:48.276627   50270 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:16:48.276720   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.280119   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.280556   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.280627   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.280943   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.281127   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.281312   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.281452   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.281621   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:48.282136   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:48.282160   50270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:45.070516   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:45.070618   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:45.087880   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:45.541593   51113 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:16:45.541627   51113 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:16:45.541640   51113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:16:45.541714   51113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:45.589291   51113 cri.go:89] found id: ""
	I1207 21:16:45.589394   51113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:16:45.606397   51113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:16:45.616135   51113 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:16:45.616192   51113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:45.625661   51113 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:45.625689   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:45.750072   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.619750   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.838835   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.935494   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:47.007474   51113 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:16:47.007536   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:47.020817   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:47.536948   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:48.036982   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:48.537584   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.036899   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.537400   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.575582   51113 api_server.go:72] duration metric: took 2.568102787s to wait for apiserver process to appear ...
	I1207 21:16:49.575614   51113 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:49.575636   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:49.576140   51113 api_server.go:269] stopped: https://192.168.39.254:8444/healthz: Get "https://192.168.39.254:8444/healthz": dial tcp 192.168.39.254:8444: connect: connection refused
	I1207 21:16:49.576174   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:49.576630   51113 api_server.go:269] stopped: https://192.168.39.254:8444/healthz: Get "https://192.168.39.254:8444/healthz": dial tcp 192.168.39.254:8444: connect: connection refused
	I1207 21:16:48.639642   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:48.639702   50270 machine.go:91] provisioned docker machine in 1.10547448s
	I1207 21:16:48.639715   50270 start.go:300] post-start starting for "old-k8s-version-483745" (driver="kvm2")
	I1207 21:16:48.639733   50270 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:48.639772   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.640106   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:48.640136   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.643155   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.643592   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.643625   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.643897   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.644101   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.644253   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.644374   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.756527   50270 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:48.761976   50270 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:48.762042   50270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:48.762117   50270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:48.762229   50270 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:48.762355   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:48.773495   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:48.802433   50270 start.go:303] post-start completed in 162.696963ms
	I1207 21:16:48.802464   50270 fix.go:56] fixHost completed within 20.771337135s
	I1207 21:16:48.802489   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.805389   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.805821   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.805853   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.806002   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.806221   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.806361   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.806516   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.806737   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:48.807177   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:48.807194   50270 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:48.948515   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983808.895290650
	
	I1207 21:16:48.948602   50270 fix.go:206] guest clock: 1701983808.895290650
	I1207 21:16:48.948622   50270 fix.go:219] Guest: 2023-12-07 21:16:48.89529065 +0000 UTC Remote: 2023-12-07 21:16:48.802469186 +0000 UTC m=+365.320601213 (delta=92.821464ms)
	I1207 21:16:48.948679   50270 fix.go:190] guest clock delta is within tolerance: 92.821464ms
	I1207 21:16:48.948694   50270 start.go:83] releasing machines lock for "old-k8s-version-483745", held for 20.917606045s
	I1207 21:16:48.948726   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.948967   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:48.952007   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.952392   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.952424   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.952680   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953302   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953494   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953578   50270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:48.953633   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.953877   50270 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:48.953904   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.957083   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957288   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957631   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.957656   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957798   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.957849   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957874   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.958105   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.958110   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.958284   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.958413   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.958443   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.958665   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.958668   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:49.082678   50270 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:49.091075   50270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:49.250638   50270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:49.259237   50270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:49.259312   50270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:49.279490   50270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:49.279520   50270 start.go:475] detecting cgroup driver to use...
	I1207 21:16:49.279592   50270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:49.301129   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:49.317758   50270 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:49.317832   50270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:49.335384   50270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:49.352808   50270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:49.487177   50270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:49.622551   50270 docker.go:219] disabling docker service ...
	I1207 21:16:49.622632   50270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:49.641913   50270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:49.655046   50270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:49.780471   50270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:49.903816   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:49.917447   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:49.939101   50270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1207 21:16:49.939170   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.949112   50270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:49.949187   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.958706   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.968115   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.977516   50270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:49.987974   50270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:49.996996   50270 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:49.997069   50270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:50.009736   50270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:50.018888   50270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:50.136461   50270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:50.337931   50270 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:50.338013   50270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:50.344175   50270 start.go:543] Will wait 60s for crictl version
	I1207 21:16:50.344237   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:50.348418   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:50.387227   50270 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:50.387329   50270 ssh_runner.go:195] Run: crio --version
	I1207 21:16:50.439820   50270 ssh_runner.go:195] Run: crio --version
	I1207 21:16:50.492743   50270 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1207 21:16:48.431193   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:50.945823   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:47.635909   51037 pod_ready.go:102] pod "coredns-76f75df574-qfwbr" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:49.635091   51037 pod_ready.go:92] pod "coredns-76f75df574-qfwbr" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:49.635119   51037 pod_ready.go:81] duration metric: took 4.022584638s waiting for pod "coredns-76f75df574-qfwbr" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:49.635139   51037 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:51.656178   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:50.494290   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:50.496890   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:50.497226   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:50.497257   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:50.497557   50270 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:50.501988   50270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:50.516192   50270 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1207 21:16:50.516266   50270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:50.564641   50270 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1207 21:16:50.564723   50270 ssh_runner.go:195] Run: which lz4
	I1207 21:16:50.569306   50270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 21:16:50.573458   50270 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:16:50.573483   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1207 21:16:52.405191   50270 crio.go:444] Took 1.835925 seconds to copy over tarball
	I1207 21:16:52.405260   50270 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:16:50.077304   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:54.602961   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:54.602994   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:54.603007   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:54.660014   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:54.660053   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:55.077712   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:55.102038   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:55.102068   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:55.577664   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:55.586714   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:55.586753   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:56.077361   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:56.084665   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 200:
	ok
	I1207 21:16:56.096164   51113 api_server.go:141] control plane version: v1.28.4
	I1207 21:16:56.096196   51113 api_server.go:131] duration metric: took 6.520574302s to wait for apiserver health ...
	I1207 21:16:56.096209   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:16:56.096219   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:53.431611   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:55.954091   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:53.656773   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:55.659213   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:56.811148   51113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:16:55.499497   50270 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094207903s)
	I1207 21:16:55.499524   50270 crio.go:451] Took 3.094311 seconds to extract the tarball
	I1207 21:16:55.499532   50270 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:16:55.539952   50270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:55.612029   50270 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1207 21:16:55.612059   50270 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:16:55.612164   50270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.612216   50270 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1207 21:16:55.612282   50270 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.612335   50270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.612216   50270 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.612433   50270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.612564   50270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.612575   50270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:55.614472   50270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.614496   50270 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1207 21:16:55.614496   50270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.614507   50270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.614513   50270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:55.614556   50270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.614571   50270 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.614556   50270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.744531   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1207 21:16:55.744539   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.747157   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.748014   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.754498   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.778012   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.781417   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.886272   50270 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1207 21:16:55.886318   50270 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1207 21:16:55.886371   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.949015   50270 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1207 21:16:55.949128   50270 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.949205   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.963217   50270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1207 21:16:55.963332   50270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.963422   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.966733   50270 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1207 21:16:55.966854   50270 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.966934   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.004614   50270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1207 21:16:56.004668   50270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:56.004721   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.015557   50270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1207 21:16:56.015655   50270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:56.015714   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.017603   50270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1207 21:16:56.017643   50270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:56.017686   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.017817   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1207 21:16:56.017913   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:56.018011   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:56.018087   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1207 21:16:56.018160   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:56.028183   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:56.030370   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:56.222552   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1207 21:16:56.222625   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1207 21:16:56.222673   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1207 21:16:56.222680   50270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.222731   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1207 21:16:56.222828   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1207 21:16:56.222911   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1207 21:16:56.236367   50270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1207 21:16:56.236387   50270 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.236440   50270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.236444   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1207 21:16:56.455526   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:58.094353   50270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.638791166s)
	I1207 21:16:58.094525   50270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.858047565s)
	I1207 21:16:58.094552   50270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1207 21:16:58.094591   50270 cache_images.go:92] LoadImages completed in 2.482516651s
	W1207 21:16:58.094650   50270 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I1207 21:16:58.094729   50270 ssh_runner.go:195] Run: crio config
	I1207 21:16:58.191059   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:16:58.191083   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:58.191108   50270 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:58.191132   50270 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.171 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-483745 NodeName:old-k8s-version-483745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1207 21:16:58.191279   50270 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-483745"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-483745
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.171:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:58.191389   50270 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-483745 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-483745 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:16:58.191462   50270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1207 21:16:58.204882   50270 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:58.204948   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:58.217370   50270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1207 21:16:58.237205   50270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:16:58.256539   50270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1207 21:16:58.276428   50270 ssh_runner.go:195] Run: grep 192.168.61.171	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:58.281568   50270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:58.295073   50270 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745 for IP: 192.168.61.171
	I1207 21:16:58.295112   50270 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:58.295295   50270 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:58.295368   50270 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:58.295493   50270 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.key
	I1207 21:16:58.295589   50270 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.key.13a54c20
	I1207 21:16:58.295658   50270 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.key
	I1207 21:16:58.295817   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:58.295861   50270 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:58.295887   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:58.295922   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:58.295972   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:58.296012   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:58.296067   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:58.296936   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:58.327708   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:16:58.354646   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:58.379025   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 21:16:58.404362   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:58.433648   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:58.459739   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:58.487457   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:58.516507   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:57.214999   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:57.244196   51113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:16:57.264778   51113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:57.978177   51113 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:57.978214   51113 system_pods.go:61] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:57.978224   51113 system_pods.go:61] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:57.978232   51113 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:57.978241   51113 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:57.978248   51113 system_pods.go:61] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:57.978254   51113 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:57.978261   51113 system_pods.go:61] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:57.978267   51113 system_pods.go:61] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:57.978276   51113 system_pods.go:74] duration metric: took 713.475246ms to wait for pod list to return data ...
	I1207 21:16:57.978285   51113 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:57.983354   51113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:57.983379   51113 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:57.983389   51113 node_conditions.go:105] duration metric: took 5.099916ms to run NodePressure ...
	I1207 21:16:57.983403   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:58.583287   51113 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:58.590472   51113 kubeadm.go:787] kubelet initialised
	I1207 21:16:58.590500   51113 kubeadm.go:788] duration metric: took 7.176115ms waiting for restarted kubelet to initialise ...
	I1207 21:16:58.590509   51113 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:58.597622   51113 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.609459   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.609491   51113 pod_ready.go:81] duration metric: took 11.841558ms waiting for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.609503   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.609513   51113 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.620143   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.620172   51113 pod_ready.go:81] duration metric: took 10.647465ms waiting for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.620185   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.620193   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.633821   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.633850   51113 pod_ready.go:81] duration metric: took 13.645914ms waiting for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.633864   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.633872   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.647333   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.647359   51113 pod_ready.go:81] duration metric: took 13.477348ms waiting for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.647373   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.647385   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.988420   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-proxy-nmx2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.988448   51113 pod_ready.go:81] duration metric: took 341.054838ms waiting for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.988457   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-proxy-nmx2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.988465   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.388053   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.388080   51113 pod_ready.go:81] duration metric: took 399.605098ms waiting for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:59.388090   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.388097   51113 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.787887   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.787913   51113 pod_ready.go:81] duration metric: took 399.809388ms waiting for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:59.787925   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.787932   51113 pod_ready.go:38] duration metric: took 1.197413161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:59.787945   51113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:16:59.801806   51113 ops.go:34] apiserver oom_adj: -16
	I1207 21:16:59.801828   51113 kubeadm.go:640] restartCluster took 24.28763849s
	I1207 21:16:59.801837   51113 kubeadm.go:406] StartCluster complete in 24.334230687s
	I1207 21:16:59.801855   51113 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:59.801945   51113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:16:59.804179   51113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:59.804458   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:16:59.804515   51113 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:16:59.804612   51113 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.804638   51113 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.804646   51113 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:16:59.804695   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:16:59.804714   51113 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.804727   51113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-275828"
	I1207 21:16:59.804704   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.805119   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805150   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805168   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.805180   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.805204   51113 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.805226   51113 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.805235   51113 addons.go:240] addon metrics-server should already be in state true
	I1207 21:16:59.805277   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.805627   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805663   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.811657   51113 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-275828" context rescaled to 1 replicas
	I1207 21:16:59.811696   51113 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:16:59.814005   51113 out.go:177] * Verifying Kubernetes components...
	I1207 21:16:59.815636   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:16:59.822134   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I1207 21:16:59.822558   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.822636   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34811
	I1207 21:16:59.822718   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I1207 21:16:59.823063   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823104   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823126   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.823128   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.823479   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.823605   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823619   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823636   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823636   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823943   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.823970   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.824050   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.824102   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.824193   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.824463   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.824502   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.828241   51113 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.828264   51113 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:16:59.828292   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.828676   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.830577   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.841996   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I1207 21:16:59.842283   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I1207 21:16:59.842697   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.842888   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.843254   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.843277   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.843391   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.843416   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.843638   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.843779   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.843831   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.843973   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.845644   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.845852   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.847586   51113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:59.847253   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I1207 21:16:59.849062   51113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:16:57.998272   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:00.429603   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:59.850487   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:16:59.850500   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:16:59.850514   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.849121   51113 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:16:59.850564   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:16:59.850583   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.849452   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.851054   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.851071   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.851664   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.852274   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.852315   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.854738   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.855190   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.855204   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.855394   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.855556   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.855649   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.855724   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.856210   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.856582   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.856596   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.856720   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.856846   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.857188   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.857324   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.871856   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I1207 21:16:59.872193   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.872726   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.872744   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.873088   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.873243   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.874542   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.874803   51113 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:16:59.874821   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:16:59.874840   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.877142   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.877524   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.877547   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.877753   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.877889   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.878024   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.878137   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.983279   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:17:00.040397   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:17:00.056981   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:17:00.057008   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:17:00.078195   51113 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1207 21:17:00.078235   51113 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-275828" to be "Ready" ...
	I1207 21:17:00.117369   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:17:00.117399   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:17:00.177756   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:17:00.177783   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:17:00.220667   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:17:01.338599   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.298167461s)
	I1207 21:17:01.338648   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338662   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.338747   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.355434262s)
	I1207 21:17:01.338789   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338802   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.338925   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.338945   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.338960   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338969   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.340360   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340373   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340381   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.340357   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340368   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340472   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.340490   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.340504   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.340785   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340788   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340804   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.347722   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.347741   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.347933   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.347950   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.347968   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.434021   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.213311264s)
	I1207 21:17:01.434084   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.434099   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.434391   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.434413   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.434410   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.434423   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.434434   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.434627   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.434637   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.434648   51113 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-275828"
	I1207 21:17:01.436476   51113 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:16:57.997177   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:59.154238   51037 pod_ready.go:92] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.154261   51037 pod_ready.go:81] duration metric: took 9.519115953s waiting for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.154270   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.159402   51037 pod_ready.go:92] pod "kube-apiserver-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.159421   51037 pod_ready.go:81] duration metric: took 5.143876ms waiting for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.159431   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.164107   51037 pod_ready.go:92] pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.164124   51037 pod_ready.go:81] duration metric: took 4.684573ms waiting for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.164134   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mzv22" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.168711   51037 pod_ready.go:92] pod "kube-proxy-mzv22" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.168727   51037 pod_ready.go:81] duration metric: took 4.587318ms waiting for pod "kube-proxy-mzv22" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.168734   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.201648   51037 pod_ready.go:92] pod "kube-scheduler-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.201676   51037 pod_ready.go:81] duration metric: took 32.935891ms waiting for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.201688   51037 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:01.509707   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:58.544765   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:58.571376   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:58.597700   50270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:58.616720   50270 ssh_runner.go:195] Run: openssl version
	I1207 21:16:58.622830   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:58.634656   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.640469   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.640526   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.646624   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:58.660113   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:58.670742   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.675735   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.675782   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.682821   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:58.696760   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:58.710547   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.716983   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.717048   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.724400   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:58.736496   50270 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:58.742587   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:58.750398   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:58.757537   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:58.764361   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:58.771280   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:58.778697   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:16:58.785873   50270 kubeadm.go:404] StartCluster: {Name:old-k8s-version-483745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-483745 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:58.786022   50270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:58.786079   50270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:58.834174   50270 cri.go:89] found id: ""
	I1207 21:16:58.834262   50270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:58.845932   50270 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:58.845958   50270 kubeadm.go:636] restartCluster start
	I1207 21:16:58.846025   50270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:58.855982   50270 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:58.857458   50270 kubeconfig.go:92] found "old-k8s-version-483745" server: "https://192.168.61.171:8443"
	I1207 21:16:58.860840   50270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:58.870183   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:58.870235   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:58.881631   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:58.881647   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:58.881693   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:58.892422   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:59.393094   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:59.393163   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:59.405578   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:59.893104   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:59.893160   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:59.906998   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:00.393560   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:00.393629   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:00.405837   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:00.893376   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:00.893472   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:00.905785   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.393118   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:01.393204   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:01.405693   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.893214   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:01.893348   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:01.906272   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:02.392588   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:02.392682   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:02.404717   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:02.893325   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:02.893425   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:02.906705   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:03.392549   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:03.392627   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:03.406493   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.437892   51113 addons.go:502] enable addons completed in 1.633389199s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:17:02.198851   51113 node_ready.go:58] node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:17:04.199518   51113 node_ready.go:58] node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:17:02.931262   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:05.431344   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:03.509733   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:05.511779   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:03.892711   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:03.892814   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:03.905553   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:04.393144   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:04.393236   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:04.406280   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:04.893375   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:04.893459   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:04.905715   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.393376   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:05.393473   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:05.405757   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.892719   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:05.892800   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:05.906258   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:06.392706   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:06.392787   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:06.405913   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:06.893392   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:06.893475   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:06.908660   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:07.392944   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:07.393037   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:07.408113   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:07.892488   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:07.892602   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:07.905157   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:08.393126   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:08.393209   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:08.405227   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.197790   51113 node_ready.go:49] node "default-k8s-diff-port-275828" has status "Ready":"True"
	I1207 21:17:05.197814   51113 node_ready.go:38] duration metric: took 5.119553512s waiting for node "default-k8s-diff-port-275828" to be "Ready" ...
	I1207 21:17:05.197825   51113 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:17:05.204644   51113 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:07.225887   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:09.229380   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:07.928733   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:09.929797   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:08.009114   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:10.012079   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:08.870396   50270 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:17:08.870427   50270 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:17:08.870439   50270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:17:08.870496   50270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:17:08.914337   50270 cri.go:89] found id: ""
	I1207 21:17:08.914412   50270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:17:08.932406   50270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:17:08.941877   50270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:17:08.942012   50270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:17:08.952016   50270 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:17:08.952038   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:09.086175   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:09.811331   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.044161   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.117851   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.218309   50270 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:17:10.218376   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:10.231007   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:10.754756   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.255150   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.755138   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.782482   50270 api_server.go:72] duration metric: took 1.564169408s to wait for apiserver process to appear ...
	I1207 21:17:11.782510   50270 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:17:11.782543   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:11.729870   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:12.727588   51113 pod_ready.go:92] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.727621   51113 pod_ready.go:81] duration metric: took 7.52294973s waiting for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.727635   51113 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.733893   51113 pod_ready.go:92] pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.733936   51113 pod_ready.go:81] duration metric: took 6.276731ms waiting for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.733951   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.739431   51113 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.739456   51113 pod_ready.go:81] duration metric: took 5.495838ms waiting for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.739467   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.745435   51113 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.745456   51113 pod_ready.go:81] duration metric: took 5.98053ms waiting for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.745468   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.751301   51113 pod_ready.go:92] pod "kube-proxy-nmx2z" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.751323   51113 pod_ready.go:81] duration metric: took 5.845741ms waiting for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.751333   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:13.122896   51113 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:13.122923   51113 pod_ready.go:81] duration metric: took 371.582675ms waiting for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:13.122936   51113 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:11.931676   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:14.433505   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:12.510180   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:14.511615   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.519216   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.783319   50270 api_server.go:269] stopped: https://192.168.61.171:8443/healthz: Get "https://192.168.61.171:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1207 21:17:16.783432   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:17.468175   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:17:17.468210   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:17:17.968919   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:17.975181   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1207 21:17:17.975206   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1207 21:17:18.469287   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:18.476311   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1207 21:17:18.476340   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1207 21:17:18.968605   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:18.974285   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:17:18.981956   50270 api_server.go:141] control plane version: v1.16.0
	I1207 21:17:18.981983   50270 api_server.go:131] duration metric: took 7.199466057s to wait for apiserver health ...
	I1207 21:17:18.981994   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:17:18.982000   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:17:18.983962   50270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:17:15.433488   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:17.434321   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.931755   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:19.430606   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:19.010615   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:21.512114   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:18.985481   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:17:18.994841   50270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:17:19.015418   50270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:17:19.029654   50270 system_pods.go:59] 7 kube-system pods found
	I1207 21:17:19.029685   50270 system_pods.go:61] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:17:19.029692   50270 system_pods.go:61] "etcd-old-k8s-version-483745" [4a920248-1b35-4834-9e6f-a0e7567b5bb8] Running
	I1207 21:17:19.029699   50270 system_pods.go:61] "kube-apiserver-old-k8s-version-483745" [aaba6fb9-56a1-497d-a398-5c685f5500dd] Running
	I1207 21:17:19.029706   50270 system_pods.go:61] "kube-controller-manager-old-k8s-version-483745" [a13bda00-a0f4-4f59-8b52-65589579efcf] Running
	I1207 21:17:19.029711   50270 system_pods.go:61] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:17:19.029715   50270 system_pods.go:61] "kube-scheduler-old-k8s-version-483745" [4fc3e12a-e294-457e-912f-0ed765ad4def] Running
	I1207 21:17:19.029718   50270 system_pods.go:61] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:17:19.029726   50270 system_pods.go:74] duration metric: took 14.290629ms to wait for pod list to return data ...
	I1207 21:17:19.029739   50270 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:17:19.033868   50270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:17:19.033897   50270 node_conditions.go:123] node cpu capacity is 2
	I1207 21:17:19.033911   50270 node_conditions.go:105] duration metric: took 4.166175ms to run NodePressure ...
	I1207 21:17:19.033945   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:19.284413   50270 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:17:19.288373   50270 retry.go:31] will retry after 182.556746ms: kubelet not initialised
	I1207 21:17:19.479987   50270 retry.go:31] will retry after 253.110045ms: kubelet not initialised
	I1207 21:17:19.744586   50270 retry.go:31] will retry after 608.133785ms: kubelet not initialised
	I1207 21:17:20.357758   50270 retry.go:31] will retry after 829.182382ms: kubelet not initialised
	I1207 21:17:21.192621   50270 retry.go:31] will retry after 998.365497ms: kubelet not initialised
	I1207 21:17:22.196882   50270 retry.go:31] will retry after 1.144379185s: kubelet not initialised
	I1207 21:17:23.346660   50270 retry.go:31] will retry after 4.175853771s: kubelet not initialised
	I1207 21:17:19.937119   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:22.433221   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:21.430858   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:23.929526   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:25.932244   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:24.011486   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:26.509908   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:27.529200   50270 retry.go:31] will retry after 6.099259697s: kubelet not initialised
	I1207 21:17:24.932035   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:26.932432   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:28.935455   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:27.933244   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:30.431008   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:29.009917   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:31.509259   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:31.432441   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.933226   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:32.431713   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:34.931903   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.510686   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:35.511611   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.635018   50270 retry.go:31] will retry after 3.426713545s: kubelet not initialised
	I1207 21:17:37.067021   50270 retry.go:31] will retry after 7.020738309s: kubelet not initialised
	I1207 21:17:35.933872   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:38.432200   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:37.432208   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:39.432443   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:38.008964   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:40.013143   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:40.434554   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:42.935808   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:41.931614   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:44.431445   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:42.510798   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:45.010221   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:44.093245   50270 retry.go:31] will retry after 15.092242293s: kubelet not initialised
	I1207 21:17:45.433353   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:47.933249   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:46.931078   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:49.430564   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:47.510355   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:50.010022   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:49.935001   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:52.433167   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:51.430664   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:53.431310   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:55.431508   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:52.509729   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:55.010127   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:54.937299   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.432126   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.929516   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:59.929800   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.511723   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:00.010732   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:59.190582   50270 retry.go:31] will retry after 18.708242221s: kubelet not initialised
	I1207 21:17:59.932898   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.435773   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.429487   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.931336   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.011470   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.508873   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:06.510378   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.932311   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:07.434111   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:07.431033   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.931058   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.009614   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:11.009942   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.932527   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:11.933100   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:14.432890   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:12.429420   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:14.431778   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:13.010085   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:15.509812   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:17.907480   50270 kubeadm.go:787] kubelet initialised
	I1207 21:18:17.907516   50270 kubeadm.go:788] duration metric: took 58.6230723s waiting for restarted kubelet to initialise ...
	I1207 21:18:17.907523   50270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:18:17.912349   50270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.917692   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.917710   50270 pod_ready.go:81] duration metric: took 5.339125ms waiting for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.917718   50270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.923173   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.923192   50270 pod_ready.go:81] duration metric: took 5.469466ms waiting for pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.923200   50270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.928824   50270 pod_ready.go:92] pod "etcd-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.928846   50270 pod_ready.go:81] duration metric: took 5.638159ms waiting for pod "etcd-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.928856   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.934993   50270 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.935014   50270 pod_ready.go:81] duration metric: took 6.149728ms waiting for pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.935025   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.311907   50270 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:18.311934   50270 pod_ready.go:81] duration metric: took 376.900024ms waiting for pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.311947   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:16.931768   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.932732   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:16.930954   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.932194   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.009341   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:20.010383   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.709795   50270 pod_ready.go:92] pod "kube-proxy-wrl9t" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:18.709818   50270 pod_ready.go:81] duration metric: took 397.865434ms waiting for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.709828   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:19.107018   50270 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:19.107046   50270 pod_ready.go:81] duration metric: took 397.21085ms waiting for pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:19.107074   50270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:21.413113   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.414993   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:20.937780   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.432192   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:21.429764   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.430826   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.930929   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:22.510894   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.009872   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.914333   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.914486   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.432249   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.432529   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.930973   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.430718   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.510016   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.009983   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.415400   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.912237   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:29.932694   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.433150   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.432680   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.931118   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.010572   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.508896   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:36.509628   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.913374   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:36.914250   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.933409   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:37.432655   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.432740   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:37.430165   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.930630   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.009629   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:41.009658   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:38.914325   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:40.915158   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:43.413980   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:41.932574   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:44.432525   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:42.431330   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:44.929635   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:43.009978   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:45.010954   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:45.414082   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.415225   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:46.932342   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:48.932460   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.429890   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.931948   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.508820   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.508885   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:51.510909   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.916969   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:52.414590   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:51.431888   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:53.432497   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:52.429836   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.429987   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.010442   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.520121   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.415187   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.914505   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:55.433372   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:57.437496   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.932937   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.430774   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.010885   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.510473   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.413820   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.413911   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.414163   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.932159   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.932344   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:04.432873   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.430926   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.930199   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.930253   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.511496   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.512541   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.913832   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:07.915554   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:06.433629   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:08.933148   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:07.931760   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.431655   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:08.009852   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.010279   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.415114   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.913846   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:11.433166   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:13.933572   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.930147   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:14.935480   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.010617   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:14.510815   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:15.414959   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.913372   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:16.433375   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:18.932915   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.436017   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.933613   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.008855   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.010583   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.510650   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.913760   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.913931   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.434113   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:23.932185   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:22.429942   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:24.432486   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:24.009731   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.513595   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:23.913964   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:25.915033   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:28.415173   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.433721   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:28.932763   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.934197   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:29.432795   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:29.008998   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.011163   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:30.912991   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:32.914672   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.432802   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.932626   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.930505   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.931069   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.510138   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:36.010166   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:34.915019   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:37.414169   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:35.933595   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.432419   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:36.433061   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.929697   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.930753   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.509265   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.509898   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:39.414719   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:41.914208   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.932356   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:42.932643   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:43.430519   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:45.930095   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:42.510763   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:44.511006   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:43.914874   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:46.414739   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:45.431904   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.932732   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.930507   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:49.930634   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.009537   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:49.009825   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.010633   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:48.914101   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.413288   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:50.433022   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:52.932549   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.930920   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:54.433488   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:53.508693   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.509440   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:53.913446   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.914532   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.416064   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.432116   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:57.935271   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:56.929900   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.931501   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.009318   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.510190   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.915025   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.414806   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.432326   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:02.432758   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:04.434643   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:01.431826   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.931069   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.931648   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.010188   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.010498   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.914269   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:07.914640   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:06.931909   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:08.932549   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:08.431136   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.932438   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:07.509186   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:09.511791   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.415605   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:12.918130   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.934599   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:13.434477   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:13.430502   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.434943   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:12.008903   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:14.010390   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:16.509062   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.415237   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.914465   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.435338   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.933559   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.931293   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:18.408309   50624 pod_ready.go:81] duration metric: took 4m0.000858815s waiting for pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:18.408355   50624 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:20:18.408376   50624 pod_ready.go:38] duration metric: took 4m11.111070516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:18.408405   50624 kubeadm.go:640] restartCluster took 4m30.625453328s
	W1207 21:20:18.408479   50624 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:20:18.408513   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:20:18.510036   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:20.510485   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:19.915160   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:21.915544   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:19.940064   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:22.432481   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:24.432791   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:23.010158   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:25.509777   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:23.915685   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:26.414017   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.415525   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:26.435601   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.932153   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.009824   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:30.509369   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:32.372266   50624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.96372485s)
	I1207 21:20:32.372349   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:20:32.386002   50624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:20:32.395757   50624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:20:32.406709   50624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:20:32.406761   50624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:20:32.465707   50624 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 21:20:32.465842   50624 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:20:32.636031   50624 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:20:32.636171   50624 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:20:32.636296   50624 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:20:32.892368   50624 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:20:32.894341   50624 out.go:204]   - Generating certificates and keys ...
	I1207 21:20:32.894484   50624 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:20:32.894581   50624 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:20:32.894717   50624 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:20:32.894799   50624 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:20:32.895289   50624 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:20:32.895583   50624 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:20:32.896112   50624 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:20:32.896577   50624 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:20:32.897032   50624 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:20:32.897567   50624 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:20:32.897804   50624 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:20:32.897886   50624 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:20:32.942322   50624 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:20:33.084899   50624 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:20:33.286309   50624 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:20:33.482188   50624 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:20:33.483077   50624 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:20:33.487928   50624 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:20:30.912937   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:32.914703   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:30.934926   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:33.431849   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:33.489853   50624 out.go:204]   - Booting up control plane ...
	I1207 21:20:33.490021   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:20:33.490177   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:20:33.490458   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:20:33.509319   50624 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:20:33.509448   50624 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:20:33.509501   50624 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:20:33.654452   50624 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:20:32.509729   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:34.510930   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:34.918486   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.414467   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:35.432767   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.931132   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.009506   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:39.011200   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.509897   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.657033   50624 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003082 seconds
	I1207 21:20:41.657193   50624 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:20:41.673142   50624 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:20:42.218438   50624 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:20:42.218706   50624 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-598346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:20:42.745090   50624 kubeadm.go:322] [bootstrap-token] Using token: 74zooz.4uhmxlwojs4pjw69
	I1207 21:20:42.746934   50624 out.go:204]   - Configuring RBAC rules ...
	I1207 21:20:42.747111   50624 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:20:42.762521   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:20:42.776210   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:20:42.781152   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:20:42.786698   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:20:42.795815   50624 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:20:42.811407   50624 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:20:43.073430   50624 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:20:43.167611   50624 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:20:43.168880   50624 kubeadm.go:322] 
	I1207 21:20:43.168970   50624 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:20:43.169014   50624 kubeadm.go:322] 
	I1207 21:20:43.169111   50624 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:20:43.169132   50624 kubeadm.go:322] 
	I1207 21:20:43.169163   50624 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:20:43.169239   50624 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:20:43.169314   50624 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:20:43.169322   50624 kubeadm.go:322] 
	I1207 21:20:43.169394   50624 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:20:43.169402   50624 kubeadm.go:322] 
	I1207 21:20:43.169475   50624 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:20:43.169500   50624 kubeadm.go:322] 
	I1207 21:20:43.169591   50624 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:20:43.169701   50624 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:20:43.169799   50624 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:20:43.169811   50624 kubeadm.go:322] 
	I1207 21:20:43.169930   50624 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:20:43.170066   50624 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:20:43.170078   50624 kubeadm.go:322] 
	I1207 21:20:43.170177   50624 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 74zooz.4uhmxlwojs4pjw69 \
	I1207 21:20:43.170303   50624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:20:43.170332   50624 kubeadm.go:322] 	--control-plane 
	I1207 21:20:43.170338   50624 kubeadm.go:322] 
	I1207 21:20:43.170463   50624 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:20:43.170474   50624 kubeadm.go:322] 
	I1207 21:20:43.170590   50624 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 74zooz.4uhmxlwojs4pjw69 \
	I1207 21:20:43.170717   50624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:20:43.171438   50624 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
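The join commands printed above embed a --discovery-token-ca-cert-hash; that value is the SHA-256 digest of the cluster CA certificate's public key (its DER-encoded SubjectPublicKeyInfo). A minimal Go sketch of the derivation, assuming the CA sits under the /var/lib/minikube/certs certificateDir mentioned earlier in this log (kubeadm's stock default would be /etc/kubernetes/pki/ca.crt):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Assumed location; adjust if the CA lives elsewhere.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("ca.crt contains no PEM block")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's hash is sha256 over the DER-encoded public key info.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }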
	I1207 21:20:43.171461   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:20:43.171467   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:20:43.173556   50624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:20:39.415520   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.416257   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:39.933233   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.933860   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:44.432482   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:43.175267   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:20:43.199404   50624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
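The two commands above create /etc/cni/net.d and copy a 457-byte bridge conflist into it; the file's actual contents are not shown in this log. Purely as an illustration of the general shape of such a file, a program writing a minimal bridge CNI config might look like the following (the cniVersion, subnet, and plugin options are assumptions, not minikube's real file):

    package main

    import "os"

    // Illustrative bridge CNI config; every value here is an assumption, since the
    // log only records that a 457-byte conflist was copied to this path.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }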
	I1207 21:20:43.237091   50624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:20:43.237150   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.237203   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=embed-certs-598346 minikube.k8s.io/updated_at=2023_12_07T21_20_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.303369   50624 ops.go:34] apiserver oom_adj: -16
	I1207 21:20:43.670500   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.788364   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:44.394973   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:44.894494   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:45.394695   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:45.895141   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.509949   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:45.511007   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:43.915384   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:45.916082   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:47.916757   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:46.432649   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:48.434738   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:46.394706   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:46.894743   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.395117   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.894780   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:48.395408   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:48.895349   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:49.394860   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:49.894472   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:50.395102   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:50.895157   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.512284   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.011848   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.413787   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:52.913793   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.933240   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:52.935428   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:51.394691   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:51.895193   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:52.395131   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:52.894787   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:53.394652   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:53.895139   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:54.395160   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:54.895153   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:55.394410   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:55.584599   50624 kubeadm.go:1088] duration metric: took 12.347498848s to wait for elevateKubeSystemPrivileges.
	I1207 21:20:55.584628   50624 kubeadm.go:406] StartCluster complete in 5m7.857234007s
	I1207 21:20:55.584645   50624 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:20:55.584733   50624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:20:55.587311   50624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:20:55.587607   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:20:55.587630   50624 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:20:55.587708   50624 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-598346"
	I1207 21:20:55.587716   50624 addons.go:69] Setting default-storageclass=true in profile "embed-certs-598346"
	I1207 21:20:55.587728   50624 addons.go:69] Setting metrics-server=true in profile "embed-certs-598346"
	I1207 21:20:55.587739   50624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-598346"
	I1207 21:20:55.587760   50624 addons.go:231] Setting addon metrics-server=true in "embed-certs-598346"
	W1207 21:20:55.587769   50624 addons.go:240] addon metrics-server should already be in state true
	I1207 21:20:55.587826   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.587736   50624 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-598346"
	W1207 21:20:55.587852   50624 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:20:55.587901   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.587824   50624 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:20:55.588192   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588202   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588223   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.588224   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.588284   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588308   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.605717   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I1207 21:20:55.605750   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I1207 21:20:55.605726   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1207 21:20:55.606254   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606305   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606338   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606778   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606803   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.606823   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606844   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.606826   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606904   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.607178   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607218   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607274   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607420   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.607776   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.607816   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.607818   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.607849   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.610610   50624 addons.go:231] Setting addon default-storageclass=true in "embed-certs-598346"
	W1207 21:20:55.610628   50624 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:20:55.610647   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.610902   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.610927   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.624530   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I1207 21:20:55.624997   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.625474   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.625492   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.625833   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.626016   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.626236   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I1207 21:20:55.626715   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.627093   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I1207 21:20:55.627538   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.627700   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.627709   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.628044   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.628061   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.628109   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.628112   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.629910   50624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:20:55.628721   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.628756   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.631270   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.631338   50624 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:20:55.631357   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:20:55.631371   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.631724   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.634618   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.636632   50624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:20:55.635162   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.635740   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.638311   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:20:55.638331   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:20:55.638354   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.638318   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.638427   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.638930   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.639110   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.639264   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.642987   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.643401   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.643432   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.643605   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.643794   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.643947   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.644065   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.649214   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37993
	I1207 21:20:55.649604   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.650085   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.650106   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.650583   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.650740   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.657356   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.657691   50624 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:20:55.657708   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:20:55.657727   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.659345   50624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-598346" context rescaled to 1 replicas
	I1207 21:20:55.659381   50624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:20:55.660949   50624 out.go:177] * Verifying Kubernetes components...
	I1207 21:20:55.662172   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:20:55.661748   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.662288   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.662323   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.662617   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.662821   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.662992   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.663175   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.825166   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:20:55.850131   50624 node_ready.go:35] waiting up to 6m0s for node "embed-certs-598346" to be "Ready" ...
	I1207 21:20:55.850203   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:20:55.850365   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:20:55.850378   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:20:55.879031   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:20:55.896010   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:20:55.896034   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:20:55.910575   50624 node_ready.go:49] node "embed-certs-598346" has status "Ready":"True"
	I1207 21:20:55.910603   50624 node_ready.go:38] duration metric: took 60.438039ms waiting for node "embed-certs-598346" to be "Ready" ...
	I1207 21:20:55.910615   50624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:55.976847   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:20:55.976874   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:20:55.981345   50624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:56.068591   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:20:52.509374   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:55.012033   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:54.915300   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.414020   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.761169   50624 pod_ready.go:97] error getting pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7cvcf" not found
	I1207 21:20:57.761195   50624 pod_ready.go:81] duration metric: took 1.779826027s waiting for pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:57.761205   50624 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7cvcf" not found
	I1207 21:20:57.761212   50624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.813172   50624 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.962919124s)
	I1207 21:20:58.813238   50624 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
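The sed pipeline that just completed rewrites the CoreDNS Corefile stored in the coredns ConfigMap: it inserts a log directive before the errors line and a hosts block before the forward directive, so in-cluster DNS resolves host.minikube.internal to 192.168.72.1. Reconstructed from the sed expressions in the command above, the affected part of the Corefile ends up roughly as:

        log
        errors
        ...
        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf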
	I1207 21:20:58.813195   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.934130104s)
	I1207 21:20:58.813281   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813299   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813520   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.988311627s)
	I1207 21:20:58.813560   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813572   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813757   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.813776   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.813787   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813796   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813831   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.814066   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.814066   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814093   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.814097   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814110   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.814132   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.814152   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.814511   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814531   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.839304   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.839329   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.839611   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.839653   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.839663   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.859922   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.791233211s)
	I1207 21:20:58.859979   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.859998   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.860412   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.860469   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.860483   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.860495   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.860430   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.860749   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.860768   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.860778   50624 addons.go:467] Verifying addon metrics-server=true in "embed-certs-598346"
	I1207 21:20:58.863874   50624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:20:55.431955   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.434174   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:58.865423   50624 addons.go:502] enable addons completed in 3.277791662s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:20:58.894841   50624 pod_ready.go:92] pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.894877   50624 pod_ready.go:81] duration metric: took 1.133651819s waiting for pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.894891   50624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.906981   50624 pod_ready.go:92] pod "etcd-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.907009   50624 pod_ready.go:81] duration metric: took 12.109561ms waiting for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.907020   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.918176   50624 pod_ready.go:92] pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.918198   50624 pod_ready.go:81] duration metric: took 11.169952ms waiting for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.918211   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.928763   50624 pod_ready.go:92] pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.928791   50624 pod_ready.go:81] duration metric: took 10.570922ms waiting for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.928804   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h4pmv" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.163618   50624 pod_ready.go:92] pod "kube-proxy-h4pmv" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:00.163652   50624 pod_ready.go:81] duration metric: took 1.234839709s waiting for pod "kube-proxy-h4pmv" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.163664   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.455887   50624 pod_ready.go:92] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:00.455909   50624 pod_ready.go:81] duration metric: took 292.236645ms waiting for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.455917   50624 pod_ready.go:38] duration metric: took 4.545291617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
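The pod_ready wait that finishes here, like the many pod_ready.go:102 lines interleaved through this log, amounts to polling each pod's Ready condition until it reports True or a timeout expires. A standalone client-go sketch of that pattern, reusing the metrics-server pod name from this cluster but otherwise not minikube's actual code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-pstg2", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod has status \"Ready\":\"True\"")
                return
            }
            fmt.Println("pod has status \"Ready\":\"False\"")
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be \"Ready\"")
    }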
	I1207 21:21:00.455932   50624 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:00.455974   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:00.474126   50624 api_server.go:72] duration metric: took 4.814712718s to wait for apiserver process to appear ...
	I1207 21:21:00.474151   50624 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:00.474170   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:21:00.480909   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1207 21:21:00.482468   50624 api_server.go:141] control plane version: v1.28.4
	I1207 21:21:00.482491   50624 api_server.go:131] duration metric: took 8.332499ms to wait for apiserver health ...
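The health wait above is a plain HTTPS GET against the apiserver's /healthz endpoint; the log shows https://192.168.72.180:8443/healthz answering 200 with body "ok". A minimal sketch of such a probe, skipping TLS verification for brevity where a real client would trust the cluster CA from the kubeconfig:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // For brevity only; a proper probe would present the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.72.180:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("/healthz returned %d: %s\n", resp.StatusCode, body)
    }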
	I1207 21:21:00.482500   50624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:00.658932   50624 system_pods.go:59] 8 kube-system pods found
	I1207 21:21:00.658965   50624 system_pods.go:61] "coredns-5dd5756b68-nllk7" [89c53a27-fa3e-40e9-b180-1bb6ae5c7b62] Running
	I1207 21:21:00.658973   50624 system_pods.go:61] "etcd-embed-certs-598346" [a837c9ba-7a9d-4c61-9474-160ff283b42e] Running
	I1207 21:21:00.658980   50624 system_pods.go:61] "kube-apiserver-embed-certs-598346" [d65bb254-2c09-49c3-98a8-651f580e5f3d] Running
	I1207 21:21:00.658986   50624 system_pods.go:61] "kube-controller-manager-embed-certs-598346" [307a7c5c-0579-4c3c-a84f-e99d61dd8722] Running
	I1207 21:21:00.658992   50624 system_pods.go:61] "kube-proxy-h4pmv" [2d3cc315-efaf-47b9-86e3-851cc930461b] Running
	I1207 21:21:00.658999   50624 system_pods.go:61] "kube-scheduler-embed-certs-598346" [43983338-9029-4240-9b20-b23f64f6880c] Running
	I1207 21:21:00.659010   50624 system_pods.go:61] "metrics-server-57f55c9bc5-pstg2" [463b12c8-de62-4ff8-a5c4-55eeb721eea8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:00.659018   50624 system_pods.go:61] "storage-provisioner" [838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14] Running
	I1207 21:21:00.659036   50624 system_pods.go:74] duration metric: took 176.530206ms to wait for pod list to return data ...
	I1207 21:21:00.659049   50624 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:00.853965   50624 default_sa.go:45] found service account: "default"
	I1207 21:21:00.853997   50624 default_sa.go:55] duration metric: took 194.939162ms for default service account to be created ...
	I1207 21:21:00.854008   50624 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:01.058565   50624 system_pods.go:86] 8 kube-system pods found
	I1207 21:21:01.058594   50624 system_pods.go:89] "coredns-5dd5756b68-nllk7" [89c53a27-fa3e-40e9-b180-1bb6ae5c7b62] Running
	I1207 21:21:01.058600   50624 system_pods.go:89] "etcd-embed-certs-598346" [a837c9ba-7a9d-4c61-9474-160ff283b42e] Running
	I1207 21:21:01.058604   50624 system_pods.go:89] "kube-apiserver-embed-certs-598346" [d65bb254-2c09-49c3-98a8-651f580e5f3d] Running
	I1207 21:21:01.058609   50624 system_pods.go:89] "kube-controller-manager-embed-certs-598346" [307a7c5c-0579-4c3c-a84f-e99d61dd8722] Running
	I1207 21:21:01.058613   50624 system_pods.go:89] "kube-proxy-h4pmv" [2d3cc315-efaf-47b9-86e3-851cc930461b] Running
	I1207 21:21:01.058617   50624 system_pods.go:89] "kube-scheduler-embed-certs-598346" [43983338-9029-4240-9b20-b23f64f6880c] Running
	I1207 21:21:01.058634   50624 system_pods.go:89] "metrics-server-57f55c9bc5-pstg2" [463b12c8-de62-4ff8-a5c4-55eeb721eea8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:01.058640   50624 system_pods.go:89] "storage-provisioner" [838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14] Running
	I1207 21:21:01.058651   50624 system_pods.go:126] duration metric: took 204.636417ms to wait for k8s-apps to be running ...
	I1207 21:21:01.058664   50624 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:01.058707   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:01.081694   50624 system_svc.go:56] duration metric: took 23.018184ms WaitForService to wait for kubelet.
	I1207 21:21:01.081719   50624 kubeadm.go:581] duration metric: took 5.422310896s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:01.081736   50624 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:01.254804   50624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:01.254838   50624 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:01.254851   50624 node_conditions.go:105] duration metric: took 173.110501ms to run NodePressure ...
	I1207 21:21:01.254866   50624 start.go:228] waiting for startup goroutines ...
	I1207 21:21:01.254875   50624 start.go:233] waiting for cluster config update ...
	I1207 21:21:01.254888   50624 start.go:242] writing updated cluster config ...
	I1207 21:21:01.255260   50624 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:01.312696   50624 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:21:01.314740   50624 out.go:177] * Done! kubectl is now configured to use "embed-certs-598346" cluster and "default" namespace by default
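The version line just above compares the local kubectl against the control-plane version and reports the minor-version skew (0 here, since both are 1.28.4). A small sketch of that comparison, not minikube's own implementation:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch" version strings such as "1.28.4".
    func minorSkew(kubectlVersion, clusterVersion string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(v, ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("unexpected version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        a, err := minor(kubectlVersion)
        if err != nil {
            return 0, err
        }
        b, err := minor(clusterVersion)
        if err != nil {
            return 0, err
        }
        if a > b {
            return a - b, nil
        }
        return b - a, nil
    }

    func main() {
        skew, err := minorSkew("1.28.4", "1.28.4")
        if err != nil {
            panic(err)
        }
        fmt.Printf("kubectl: 1.28.4, cluster: 1.28.4 (minor skew: %d)\n", skew)
    }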
	I1207 21:20:57.510167   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:59.202324   51037 pod_ready.go:81] duration metric: took 4m0.000618876s waiting for pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:59.202361   51037 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:20:59.202386   51037 pod_ready.go:38] duration metric: took 4m13.59894194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:59.202417   51037 kubeadm.go:640] restartCluster took 4m34.848470509s
	W1207 21:20:59.202490   51037 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:20:59.202525   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:20:59.416072   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:01.416132   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:59.932924   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:01.933678   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:04.432068   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:03.914100   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:06.414149   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:06.432277   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:08.432456   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:08.914660   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:10.927167   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.414941   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.233635   51037 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.031083103s)
	I1207 21:21:13.233717   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:13.246941   51037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:21:13.256697   51037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:21:13.265143   51037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:21:13.265188   51037 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:21:13.323766   51037 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1207 21:21:13.323875   51037 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:21:13.477749   51037 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:21:13.477938   51037 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:21:13.478083   51037 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:21:13.750607   51037 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:21:13.752541   51037 out.go:204]   - Generating certificates and keys ...
	I1207 21:21:13.752655   51037 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:21:13.752735   51037 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:21:13.752887   51037 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:21:13.753031   51037 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:21:13.753250   51037 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:21:13.753432   51037 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:21:13.753647   51037 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:21:13.753850   51037 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:21:13.754167   51037 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:21:13.755114   51037 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:21:13.755889   51037 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:21:13.756020   51037 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:21:13.859938   51037 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:21:14.193613   51037 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 21:21:14.239766   51037 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:21:14.448306   51037 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:21:14.537558   51037 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:21:14.538242   51037 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:21:14.542910   51037 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:21:10.432632   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:12.932769   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.123869   51113 pod_ready.go:81] duration metric: took 4m0.000917841s waiting for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	E1207 21:21:13.123898   51113 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:21:13.123907   51113 pod_ready.go:38] duration metric: took 4m7.926070649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:13.123923   51113 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:13.123951   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:13.124010   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:13.197887   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:13.197918   51113 cri.go:89] found id: ""
	I1207 21:21:13.197947   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:13.198016   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.203887   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:13.203953   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:13.250727   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:13.250754   51113 cri.go:89] found id: ""
	I1207 21:21:13.250766   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:13.250823   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.255837   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:13.255881   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:13.297690   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:13.297719   51113 cri.go:89] found id: ""
	I1207 21:21:13.297729   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:13.297786   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.303238   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:13.303301   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:13.349838   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:13.349879   51113 cri.go:89] found id: ""
	I1207 21:21:13.349890   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:13.349960   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.354368   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:13.354423   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:13.394201   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:13.394230   51113 cri.go:89] found id: ""
	I1207 21:21:13.394240   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:13.394298   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.398418   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:13.398489   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:13.443027   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:13.443055   51113 cri.go:89] found id: ""
	I1207 21:21:13.443065   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:13.443129   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.447530   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:13.447601   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:13.491670   51113 cri.go:89] found id: ""
	I1207 21:21:13.491712   51113 logs.go:284] 0 containers: []
	W1207 21:21:13.491720   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:13.491735   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:13.491795   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:13.541386   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:13.541414   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:13.541421   51113 cri.go:89] found id: ""
	I1207 21:21:13.541430   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:13.541491   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.546270   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.551524   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:13.551549   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:13.630073   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:13.630119   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:13.680287   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:13.680318   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:13.733406   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:13.733442   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:13.751810   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:13.751845   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:13.905859   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:13.905889   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:13.950595   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:13.950626   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:13.993833   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:13.993862   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:14.488205   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:14.488242   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:14.531169   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:14.531201   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:14.588229   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:14.588268   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:14.642280   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:14.642310   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:14.693027   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:14.693062   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
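The log-gathering loop above follows a fixed pattern: resolve each component's container ID with `crictl ps -a --quiet --name=<component>`, then dump its last 400 lines with `crictl logs --tail 400 <id>`, plus journalctl and dmesg for host-level sources. The same commands can be run by hand on the node; this sketch reuses them exactly as they appear in the log, with kube-apiserver as an example component:

    # Find the container ID for a component, then tail its logs.
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    sudo /usr/bin/crictl logs --tail 400 "$ID"

    # Host-level sources gathered the same way:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400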
	I1207 21:21:14.544787   51037 out.go:204]   - Booting up control plane ...
	I1207 21:21:14.544925   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:21:14.545032   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:21:14.545988   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:21:14.565092   51037 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:21:14.566289   51037 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:21:14.566356   51037 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:21:14.723698   51037 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
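At this point kubeadm has written the static Pod manifests and is waiting (up to 4m0s) for the kubelet to bring the control plane up from /etc/kubernetes/manifests. A quick way to watch that progress on the node (illustrative; the manifest directory is the one named in the log and crictl is the CRI client used throughout this run):

    # Static Pod manifests the kubelet will start:
    sudo ls /etc/kubernetes/manifests/
    # Check that a control-plane container has come up (repeat per component):
    sudo crictl ps --name kube-apiserver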
	I1207 21:21:15.913198   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:17.914942   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:17.234321   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:17.253156   51113 api_server.go:72] duration metric: took 4m17.441427611s to wait for apiserver process to appear ...
	I1207 21:21:17.253187   51113 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:17.253223   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:17.253330   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:17.301526   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:17.301557   51113 cri.go:89] found id: ""
	I1207 21:21:17.301573   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:17.301631   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.306049   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:17.306124   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:17.359167   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:17.359195   51113 cri.go:89] found id: ""
	I1207 21:21:17.359205   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:17.359264   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.363853   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:17.363919   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:17.403245   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:17.403271   51113 cri.go:89] found id: ""
	I1207 21:21:17.403281   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:17.403345   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.407694   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:17.407771   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:17.462260   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:17.462287   51113 cri.go:89] found id: ""
	I1207 21:21:17.462298   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:17.462355   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.467157   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:17.467214   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:17.502206   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:17.502236   51113 cri.go:89] found id: ""
	I1207 21:21:17.502246   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:17.502301   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.507601   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:17.507672   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:17.550248   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:17.550275   51113 cri.go:89] found id: ""
	I1207 21:21:17.550284   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:17.550345   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.554817   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:17.554879   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:17.595234   51113 cri.go:89] found id: ""
	I1207 21:21:17.595262   51113 logs.go:284] 0 containers: []
	W1207 21:21:17.595272   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:17.595280   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:17.595331   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:17.657464   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:17.657491   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:17.657501   51113 cri.go:89] found id: ""
	I1207 21:21:17.657511   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:17.657566   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.662364   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.667878   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:17.667901   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:17.716160   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:17.716187   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:17.770503   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:17.770548   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:17.836877   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:17.836933   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:17.881499   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:17.881536   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:17.930792   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:17.930837   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:17.945486   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:17.945519   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:18.087782   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:18.087825   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:18.149272   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:18.149312   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:18.196792   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:18.196829   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:18.243539   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:18.243575   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:18.305424   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:18.305465   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:18.772176   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:18.772213   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:19.916426   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:22.414318   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:22.728616   51037 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002882 seconds
	I1207 21:21:22.745711   51037 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:21:22.772747   51037 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:21:23.310807   51037 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:21:23.311004   51037 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-950431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:21:23.826933   51037 kubeadm.go:322] [bootstrap-token] Using token: ft70hz.nx8ps5rcldht4kzk
	I1207 21:21:23.828530   51037 out.go:204]   - Configuring RBAC rules ...
	I1207 21:21:23.828676   51037 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:21:23.836739   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:21:23.845207   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:21:23.852566   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:21:23.856912   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:21:23.863418   51037 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:21:23.881183   51037 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:21:24.185664   51037 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:21:24.246564   51037 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:21:24.246626   51037 kubeadm.go:322] 
	I1207 21:21:24.246741   51037 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:21:24.246761   51037 kubeadm.go:322] 
	I1207 21:21:24.246858   51037 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:21:24.246868   51037 kubeadm.go:322] 
	I1207 21:21:24.246898   51037 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:21:24.246967   51037 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:21:24.247047   51037 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:21:24.247063   51037 kubeadm.go:322] 
	I1207 21:21:24.247122   51037 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:21:24.247132   51037 kubeadm.go:322] 
	I1207 21:21:24.247183   51037 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:21:24.247193   51037 kubeadm.go:322] 
	I1207 21:21:24.247259   51037 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:21:24.247361   51037 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:21:24.247450   51037 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:21:24.247461   51037 kubeadm.go:322] 
	I1207 21:21:24.247565   51037 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:21:24.247669   51037 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:21:24.247678   51037 kubeadm.go:322] 
	I1207 21:21:24.247777   51037 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft70hz.nx8ps5rcldht4kzk \
	I1207 21:21:24.247910   51037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:21:24.247941   51037 kubeadm.go:322] 	--control-plane 
	I1207 21:21:24.247951   51037 kubeadm.go:322] 
	I1207 21:21:24.248049   51037 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:21:24.248059   51037 kubeadm.go:322] 
	I1207 21:21:24.248150   51037 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft70hz.nx8ps5rcldht4kzk \
	I1207 21:21:24.248271   51037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:21:24.249001   51037 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:21:24.249031   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:21:24.249041   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:21:24.250938   51037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:21:21.338084   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:21:21.343250   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 200:
	ok
	I1207 21:21:21.344871   51113 api_server.go:141] control plane version: v1.28.4
	I1207 21:21:21.344892   51113 api_server.go:131] duration metric: took 4.091697961s to wait for apiserver health ...
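The healthz probe above hits the apiserver directly on the node's advertised address and port (8444 for this default-k8s-diff-port profile) and expects the plain "ok" body shown in the log. The same check can be reproduced with curl; a sketch, assuming the apiserver's self-signed CA is not trusted by the client (hence -k) and that /healthz is reachable without client credentials:

    # Reproduce the health probe against the address from the log:
    curl -k https://192.168.39.254:8444/healthz
    # Expected output on success, as logged above:
    # ok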
	I1207 21:21:21.344901   51113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:21.344930   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:21.344990   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:21.385908   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:21.385944   51113 cri.go:89] found id: ""
	I1207 21:21:21.385954   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:21.386011   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.390584   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:21.390655   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:21.435206   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:21.435226   51113 cri.go:89] found id: ""
	I1207 21:21:21.435236   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:21.435294   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.441020   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:21.441091   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:21.480294   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:21.480319   51113 cri.go:89] found id: ""
	I1207 21:21:21.480329   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:21.480384   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.484454   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:21.484511   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:21.531792   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:21.531817   51113 cri.go:89] found id: ""
	I1207 21:21:21.531826   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:21.531884   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.536194   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:21.536265   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:21.579784   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:21.579803   51113 cri.go:89] found id: ""
	I1207 21:21:21.579810   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:21.579852   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.583895   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:21.583961   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:21.623350   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:21.623383   51113 cri.go:89] found id: ""
	I1207 21:21:21.623393   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:21.623450   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.628173   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:21.628226   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:21.670522   51113 cri.go:89] found id: ""
	I1207 21:21:21.670549   51113 logs.go:284] 0 containers: []
	W1207 21:21:21.670559   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:21.670565   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:21.670622   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:21.717892   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:21.717918   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:21.717939   51113 cri.go:89] found id: ""
	I1207 21:21:21.717958   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:21.718024   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.724161   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.728796   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:21.728817   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:21.743574   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:21.743599   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:22.158202   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:22.158247   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:22.224569   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:22.224610   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:22.376503   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:22.376539   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:22.421207   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:22.421236   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:22.468100   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:22.468130   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:22.514216   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:22.514246   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:22.563190   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:22.563217   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:22.622636   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:22.622673   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:22.673280   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:22.673309   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:22.724767   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:22.724799   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:22.787505   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:22.787539   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:25.337268   51113 system_pods.go:59] 8 kube-system pods found
	I1207 21:21:25.337297   51113 system_pods.go:61] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running
	I1207 21:21:25.337304   51113 system_pods.go:61] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running
	I1207 21:21:25.337312   51113 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running
	I1207 21:21:25.337319   51113 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running
	I1207 21:21:25.337325   51113 system_pods.go:61] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running
	I1207 21:21:25.337331   51113 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running
	I1207 21:21:25.337338   51113 system_pods.go:61] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:25.337347   51113 system_pods.go:61] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running
	I1207 21:21:25.337354   51113 system_pods.go:74] duration metric: took 3.99244703s to wait for pod list to return data ...
	I1207 21:21:25.337363   51113 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:25.340607   51113 default_sa.go:45] found service account: "default"
	I1207 21:21:25.340630   51113 default_sa.go:55] duration metric: took 3.261042ms for default service account to be created ...
	I1207 21:21:25.340637   51113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:25.351616   51113 system_pods.go:86] 8 kube-system pods found
	I1207 21:21:25.351640   51113 system_pods.go:89] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running
	I1207 21:21:25.351646   51113 system_pods.go:89] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running
	I1207 21:21:25.351651   51113 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running
	I1207 21:21:25.351656   51113 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running
	I1207 21:21:25.351659   51113 system_pods.go:89] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running
	I1207 21:21:25.351663   51113 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running
	I1207 21:21:25.351670   51113 system_pods.go:89] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:25.351675   51113 system_pods.go:89] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running
	I1207 21:21:25.351681   51113 system_pods.go:126] duration metric: took 11.04015ms to wait for k8s-apps to be running ...
	I1207 21:21:25.351686   51113 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:25.351725   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:25.368853   51113 system_svc.go:56] duration metric: took 17.156347ms WaitForService to wait for kubelet.
	I1207 21:21:25.368883   51113 kubeadm.go:581] duration metric: took 4m25.557159696s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:25.368908   51113 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:25.372224   51113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:25.372247   51113 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:25.372257   51113 node_conditions.go:105] duration metric: took 3.343495ms to run NodePressure ...
	I1207 21:21:25.372268   51113 start.go:228] waiting for startup goroutines ...
	I1207 21:21:25.372273   51113 start.go:233] waiting for cluster config update ...
	I1207 21:21:25.372282   51113 start.go:242] writing updated cluster config ...
	I1207 21:21:25.372598   51113 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:25.426941   51113 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:21:25.429177   51113 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-275828" cluster and "default" namespace by default
	I1207 21:21:24.252623   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:21:24.278852   51037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
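The bridge CNI step above only creates /etc/cni/net.d and copies a small (457-byte) conflist into it; the file's contents are not shown in the log. For illustration, a hypothetical minimal bridge conflist of roughly that shape (the real 1-k8s.conflist written above may differ in names, subnet, and plugin options):

    # Hypothetical example of a bridge CNI conflist; not the actual file from this run.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF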
	I1207 21:21:24.346081   51037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:21:24.346144   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.346161   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=no-preload-950431 minikube.k8s.io/updated_at=2023_12_07T21_21_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.458044   51037 ops.go:34] apiserver oom_adj: -16
	I1207 21:21:24.715413   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.801098   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:25.396467   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:25.895918   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:26.396185   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.914616   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:26.915500   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:26.896260   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:27.396455   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:27.896542   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:28.396551   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:28.896865   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.395921   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.896782   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:30.396223   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:30.896296   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:31.395834   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.414005   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:31.415580   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:31.896019   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:32.395959   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:32.895826   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:33.396820   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:33.896674   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:34.396109   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:34.896537   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:35.396438   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:35.896709   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:36.396689   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:36.896404   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:37.062200   51037 kubeadm.go:1088] duration metric: took 12.716124423s to wait for elevateKubeSystemPrivileges.
	I1207 21:21:37.062237   51037 kubeadm.go:406] StartCluster complete in 5m12.769835709s
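The repeated `kubectl get sa default` invocations above are a poll loop: the elevateKubeSystemPrivileges step appears to be considered complete once the `default` ServiceAccount has been created by the controller-manager, after which the minikube-rbac ClusterRoleBinding from earlier can take effect. A hand-rolled equivalent of that wait, reusing the binary path and kubeconfig from the log (sketch only):

    # Poll until the default ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done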
	I1207 21:21:37.062255   51037 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:21:37.062333   51037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:21:37.064828   51037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:21:37.065103   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:21:37.065193   51037 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:21:37.065273   51037 addons.go:69] Setting storage-provisioner=true in profile "no-preload-950431"
	I1207 21:21:37.065291   51037 addons.go:231] Setting addon storage-provisioner=true in "no-preload-950431"
	W1207 21:21:37.065299   51037 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:21:37.065297   51037 addons.go:69] Setting default-storageclass=true in profile "no-preload-950431"
	I1207 21:21:37.065323   51037 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:21:37.065329   51037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-950431"
	I1207 21:21:37.065349   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.065302   51037 addons.go:69] Setting metrics-server=true in profile "no-preload-950431"
	I1207 21:21:37.065374   51037 addons.go:231] Setting addon metrics-server=true in "no-preload-950431"
	W1207 21:21:37.065388   51037 addons.go:240] addon metrics-server should already be in state true
	I1207 21:21:37.065423   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.065737   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065751   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065751   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065780   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.065772   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.065821   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.083129   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I1207 21:21:37.083593   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I1207 21:21:37.083761   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084047   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084356   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I1207 21:21:37.084566   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.084590   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.084625   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.084645   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.084667   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084935   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.084997   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.085044   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.085065   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.085381   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.085505   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.085542   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.085741   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.085909   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.085964   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.089134   51037 addons.go:231] Setting addon default-storageclass=true in "no-preload-950431"
	W1207 21:21:37.089153   51037 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:21:37.089180   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.089673   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.089712   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.101048   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35191
	I1207 21:21:37.101516   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.102279   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.102300   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.102727   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.103618   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.106122   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.107693   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I1207 21:21:37.107843   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I1207 21:21:37.108128   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.108521   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.108696   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.108709   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.109070   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.109204   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.109227   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.114090   51037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:21:37.109833   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.109949   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.115707   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.115743   51037 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:21:37.115765   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:21:37.115789   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.116919   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.119056   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.120429   51037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:21:37.121716   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:21:37.121741   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:21:37.121759   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.119470   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.121830   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.121852   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.120097   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.122062   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.122309   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.122432   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.124738   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.124992   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.125012   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.125346   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.125523   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.125647   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.125817   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.136943   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I1207 21:21:37.137636   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.138210   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.138233   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.138659   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.138896   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.140541   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.140792   51037 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:21:37.140808   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:21:37.140824   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.144251   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.144616   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.144667   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.144856   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.145009   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.145167   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.145260   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.157909   51037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-950431" context rescaled to 1 replicas
	I1207 21:21:37.157965   51037 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:21:37.159529   51037 out.go:177] * Verifying Kubernetes components...
	I1207 21:21:33.914686   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:35.916902   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:38.413489   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:37.160895   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:37.329265   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
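The sed pipeline above splices a hosts stanza (192.168.50.1 host.minikube.internal, with fallthrough) into the CoreDNS Corefile held in the coredns ConfigMap and then replaces the ConfigMap via kubectl. A minimal way to confirm the record landed, assuming kubectl is already pointed at this cluster, is:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    # expect the 192.168.50.1 host.minikube.internal entry followed by "fallthrough"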
	I1207 21:21:37.476842   51037 node_ready.go:35] waiting up to 6m0s for node "no-preload-950431" to be "Ready" ...
	I1207 21:21:37.481433   51037 node_ready.go:49] node "no-preload-950431" has status "Ready":"True"
	I1207 21:21:37.481456   51037 node_ready.go:38] duration metric: took 4.57457ms waiting for node "no-preload-950431" to be "Ready" ...
	I1207 21:21:37.481467   51037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:37.499564   51037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-cz2xd" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:37.556110   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:21:37.556142   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:21:37.558917   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:21:37.575696   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:21:37.653458   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:21:37.653478   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:21:37.782294   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:21:37.782322   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:21:37.850657   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:21:38.161232   51037 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1207 21:21:38.734356   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.175402881s)
	I1207 21:21:38.734410   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734420   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734423   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.158690213s)
	I1207 21:21:38.734466   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734482   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734859   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.734873   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.734860   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.734911   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.734927   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734935   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734913   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735006   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.735016   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.735028   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.735166   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735192   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.735321   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.735357   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735369   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.772677   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.772700   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.772969   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.773038   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.773055   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.056990   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206289914s)
	I1207 21:21:39.057048   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:39.057064   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:39.057441   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:39.057480   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:39.057502   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.057520   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:39.057534   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:39.057809   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:39.057826   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.057845   51037 addons.go:467] Verifying addon metrics-server=true in "no-preload-950431"
	I1207 21:21:39.060003   51037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:21:39.061797   51037 addons.go:502] enable addons completed in 1.996609653s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:21:39.690111   51037 pod_ready.go:102] pod "coredns-76f75df574-cz2xd" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:40.698712   51037 pod_ready.go:92] pod "coredns-76f75df574-cz2xd" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.698739   51037 pod_ready.go:81] duration metric: took 3.199144567s waiting for pod "coredns-76f75df574-cz2xd" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.698751   51037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hsjsq" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.714087   51037 pod_ready.go:92] pod "coredns-76f75df574-hsjsq" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.714108   51037 pod_ready.go:81] duration metric: took 15.350128ms waiting for pod "coredns-76f75df574-hsjsq" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.714117   51037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.725058   51037 pod_ready.go:92] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.725078   51037 pod_ready.go:81] duration metric: took 10.955777ms waiting for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.725089   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.742099   51037 pod_ready.go:92] pod "kube-apiserver-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.742127   51037 pod_ready.go:81] duration metric: took 17.029172ms waiting for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.742140   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.748676   51037 pod_ready.go:92] pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.748699   51037 pod_ready.go:81] duration metric: took 6.549805ms waiting for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.748713   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6v8td" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:41.988512   51037 pod_ready.go:92] pod "kube-proxy-6v8td" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:41.988537   51037 pod_ready.go:81] duration metric: took 1.239816309s waiting for pod "kube-proxy-6v8td" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:41.988545   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:42.283301   51037 pod_ready.go:92] pod "kube-scheduler-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:42.283330   51037 pod_ready.go:81] duration metric: took 294.777559ms waiting for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:42.283341   51037 pod_ready.go:38] duration metric: took 4.801864648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:42.283360   51037 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:42.283420   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:42.308983   51037 api_server.go:72] duration metric: took 5.150987572s to wait for apiserver process to appear ...
	I1207 21:21:42.309013   51037 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:42.309036   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:21:42.315006   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 200:
	ok
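The healthz probe is an ordinary HTTPS GET against the apiserver. Kubeadm-style clusters normally allow it anonymously through the system:public-info-viewer binding, so, assuming that default RBAC is intact, the same check can be reproduced from the host:

    curl -ks https://192.168.50.100:8443/healthz
    # a healthy apiserver answers HTTP 200 with the body: ok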
	I1207 21:21:42.316220   51037 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 21:21:42.316240   51037 api_server.go:131] duration metric: took 7.219959ms to wait for apiserver health ...
	I1207 21:21:42.316247   51037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:42.485186   51037 system_pods.go:59] 9 kube-system pods found
	I1207 21:21:42.485214   51037 system_pods.go:61] "coredns-76f75df574-cz2xd" [5757c023-02cd-4be8-b4cc-6b45154f7b5a] Running
	I1207 21:21:42.485218   51037 system_pods.go:61] "coredns-76f75df574-hsjsq" [91f9ed18-c964-409d-9a58-7c84c62d51db] Running
	I1207 21:21:42.485223   51037 system_pods.go:61] "etcd-no-preload-950431" [c5480a67-a406-4014-bf13-3e4e970d528b] Running
	I1207 21:21:42.485228   51037 system_pods.go:61] "kube-apiserver-no-preload-950431" [73177a27-c561-4f5c-900a-80226abb7bf1] Running
	I1207 21:21:42.485234   51037 system_pods.go:61] "kube-controller-manager-no-preload-950431" [3e231c95-fb0b-4915-9ab0-45f35e7d6a2c] Running
	I1207 21:21:42.485237   51037 system_pods.go:61] "kube-proxy-6v8td" [268d28d1-60a9-4323-b36f-883388fbdcea] Running
	I1207 21:21:42.485242   51037 system_pods.go:61] "kube-scheduler-no-preload-950431" [a6767118-a858-439d-a58f-0e62b0b7442e] Running
	I1207 21:21:42.485251   51037 system_pods.go:61] "metrics-server-57f55c9bc5-ffkls" [e571e115-9e30-4be3-b77c-27db27a95feb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:42.485258   51037 system_pods.go:61] "storage-provisioner" [9400eb14-80e0-4725-906e-b80cd7e998a1] Running
	I1207 21:21:42.485278   51037 system_pods.go:74] duration metric: took 169.025303ms to wait for pod list to return data ...
	I1207 21:21:42.485287   51037 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:42.680542   51037 default_sa.go:45] found service account: "default"
	I1207 21:21:42.680569   51037 default_sa.go:55] duration metric: took 195.272707ms for default service account to be created ...
	I1207 21:21:42.680577   51037 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:42.890877   51037 system_pods.go:86] 9 kube-system pods found
	I1207 21:21:42.890927   51037 system_pods.go:89] "coredns-76f75df574-cz2xd" [5757c023-02cd-4be8-b4cc-6b45154f7b5a] Running
	I1207 21:21:42.890933   51037 system_pods.go:89] "coredns-76f75df574-hsjsq" [91f9ed18-c964-409d-9a58-7c84c62d51db] Running
	I1207 21:21:42.890938   51037 system_pods.go:89] "etcd-no-preload-950431" [c5480a67-a406-4014-bf13-3e4e970d528b] Running
	I1207 21:21:42.890942   51037 system_pods.go:89] "kube-apiserver-no-preload-950431" [73177a27-c561-4f5c-900a-80226abb7bf1] Running
	I1207 21:21:42.890946   51037 system_pods.go:89] "kube-controller-manager-no-preload-950431" [3e231c95-fb0b-4915-9ab0-45f35e7d6a2c] Running
	I1207 21:21:42.890950   51037 system_pods.go:89] "kube-proxy-6v8td" [268d28d1-60a9-4323-b36f-883388fbdcea] Running
	I1207 21:21:42.890954   51037 system_pods.go:89] "kube-scheduler-no-preload-950431" [a6767118-a858-439d-a58f-0e62b0b7442e] Running
	I1207 21:21:42.890960   51037 system_pods.go:89] "metrics-server-57f55c9bc5-ffkls" [e571e115-9e30-4be3-b77c-27db27a95feb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:42.890965   51037 system_pods.go:89] "storage-provisioner" [9400eb14-80e0-4725-906e-b80cd7e998a1] Running
	I1207 21:21:42.890973   51037 system_pods.go:126] duration metric: took 210.38383ms to wait for k8s-apps to be running ...
	I1207 21:21:42.890979   51037 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:42.891021   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:42.907279   51037 system_svc.go:56] duration metric: took 16.290689ms WaitForService to wait for kubelet.
	I1207 21:21:42.907306   51037 kubeadm.go:581] duration metric: took 5.749318034s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:42.907328   51037 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:43.081361   51037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:43.081390   51037 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:43.081401   51037 node_conditions.go:105] duration metric: took 174.067442ms to run NodePressure ...
	I1207 21:21:43.081412   51037 start.go:228] waiting for startup goroutines ...
	I1207 21:21:43.081420   51037 start.go:233] waiting for cluster config update ...
	I1207 21:21:43.081433   51037 start.go:242] writing updated cluster config ...
	I1207 21:21:43.081691   51037 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:43.131409   51037 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1207 21:21:43.133483   51037 out.go:177] * Done! kubectl is now configured to use "no-preload-950431" cluster and "default" namespace by default
	I1207 21:21:40.414676   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:42.913795   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:44.914599   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:47.414431   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:49.913391   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:51.914426   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:53.915196   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:55.923342   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:58.413783   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:00.414241   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:02.414435   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:04.913358   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:06.913909   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:08.915098   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:11.414320   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:13.414489   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:15.913521   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:18.415215   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:19.107244   50270 pod_ready.go:81] duration metric: took 4m0.000150933s waiting for pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace to be "Ready" ...
	E1207 21:22:19.107300   50270 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:22:19.107323   50270 pod_ready.go:38] duration metric: took 4m1.199790563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:19.107355   50270 kubeadm.go:640] restartCluster took 5m20.261390035s
	W1207 21:22:19.107437   50270 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
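When the extra wait times out on a metrics-server pod like this, the usual first step is to inspect the pod's events and image pulls. A sketch of that check, assuming the addon's standard k8s-app=metrics-server label:

    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kube-system describe pods -l k8s-app=metrics-server | tail -n 20
    # the addon in these runs is configured with fake.domain/registry.k8s.io/echoserver:1.4
    # (see the "Using image" lines above), so a failed image pull is the likely reason
    # the pod never reports Ready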
	I1207 21:22:19.107470   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:22:26.124587   50270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (7.017092462s)
	I1207 21:22:26.124664   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:22:26.139323   50270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:22:26.150243   50270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:22:26.164289   50270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:22:26.164356   50270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
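The init above explicitly skips several preflight checks. If only the preflight stage needs to be re-run for diagnosis, kubeadm exposes it as a separate phase; a sketch, assuming the generated config file is still in place:

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml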
	I1207 21:22:26.390137   50270 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:22:39.046001   50270 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1207 21:22:39.046063   50270 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:22:39.046164   50270 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:22:39.046322   50270 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:22:39.046454   50270 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:22:39.046581   50270 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:22:39.046685   50270 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:22:39.046759   50270 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1207 21:22:39.046836   50270 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:22:39.048426   50270 out.go:204]   - Generating certificates and keys ...
	I1207 21:22:39.048532   50270 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:22:39.048617   50270 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:22:39.048713   50270 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:22:39.048808   50270 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:22:39.048899   50270 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:22:39.048977   50270 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:22:39.049066   50270 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:22:39.049151   50270 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:22:39.049254   50270 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:22:39.049341   50270 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:22:39.049396   50270 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:22:39.049496   50270 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:22:39.049578   50270 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:22:39.049671   50270 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:22:39.049758   50270 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:22:39.049829   50270 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:22:39.049884   50270 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:22:39.051499   50270 out.go:204]   - Booting up control plane ...
	I1207 21:22:39.051604   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:22:39.051706   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:22:39.051778   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:22:39.051841   50270 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:22:39.052043   50270 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:22:39.052137   50270 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.502878 seconds
	I1207 21:22:39.052296   50270 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:22:39.052458   50270 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:22:39.052537   50270 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:22:39.052714   50270 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-483745 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1207 21:22:39.052802   50270 kubeadm.go:322] [bootstrap-token] Using token: 88595b.vk24k0k7lcyxvxlg
	I1207 21:22:39.054142   50270 out.go:204]   - Configuring RBAC rules ...
	I1207 21:22:39.054250   50270 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:22:39.054369   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:22:39.054470   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:22:39.054565   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:22:39.054675   50270 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:22:39.054740   50270 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:22:39.054805   50270 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:22:39.054813   50270 kubeadm.go:322] 
	I1207 21:22:39.054905   50270 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:22:39.054917   50270 kubeadm.go:322] 
	I1207 21:22:39.054996   50270 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:22:39.055004   50270 kubeadm.go:322] 
	I1207 21:22:39.055031   50270 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:22:39.055107   50270 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:22:39.055174   50270 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:22:39.055187   50270 kubeadm.go:322] 
	I1207 21:22:39.055254   50270 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:22:39.055366   50270 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:22:39.055467   50270 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:22:39.055476   50270 kubeadm.go:322] 
	I1207 21:22:39.055565   50270 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1207 21:22:39.055655   50270 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:22:39.055663   50270 kubeadm.go:322] 
	I1207 21:22:39.055776   50270 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 88595b.vk24k0k7lcyxvxlg \
	I1207 21:22:39.055929   50270 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:22:39.055969   50270 kubeadm.go:322]     --control-plane 	  
	I1207 21:22:39.055979   50270 kubeadm.go:322] 
	I1207 21:22:39.056099   50270 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:22:39.056111   50270 kubeadm.go:322] 
	I1207 21:22:39.056215   50270 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 88595b.vk24k0k7lcyxvxlg \
	I1207 21:22:39.056371   50270 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
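The bootstrap token quoted in the join command is time-limited. Assuming it has not yet expired, it can be listed on the control plane, and the discovery CA hash re-derived from the certificate directory this run uses (/var/lib/minikube/certs), with the standard kubeadm recipe:

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm token list
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | \
      openssl rsa -pubin -outform der 2>/dev/null | \
      openssl dgst -sha256 -hex | sed 's/^.* //'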
	I1207 21:22:39.056402   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:22:39.056414   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:22:39.058073   50270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:22:39.059659   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:22:39.078052   50270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
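The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist is a bridge CNI configuration. As an illustration only (a generic conflist in the containernetworking/plugins schema, not necessarily byte-for-byte what minikube writes), such a file looks roughly like the one written below:

    # illustrative example only; the real minikube-generated conflist may differ
    cat <<'EOF' > /tmp/1-k8s.conflist.example
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF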
	I1207 21:22:39.118479   50270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:22:39.118540   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=old-k8s-version-483745 minikube.k8s.io/updated_at=2023_12_07T21_22_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.118551   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.149391   50270 ops.go:34] apiserver oom_adj: -16
	I1207 21:22:39.334606   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.476182   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:40.075027   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:40.574693   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:41.074497   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:41.575214   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:42.075168   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:42.575162   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:43.074671   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:43.575406   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:44.074823   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:44.574597   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:45.075138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:45.575119   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:46.075437   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:46.575138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:47.075138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:47.575171   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:48.074939   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:48.574679   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:49.075065   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:49.574571   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:50.074553   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:50.575129   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:51.075320   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:51.574806   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:52.075136   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:52.575144   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:53.075139   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:53.575394   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:54.075185   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:54.274051   50270 kubeadm.go:1088] duration metric: took 15.155559482s to wait for elevateKubeSystemPrivileges.
	I1207 21:22:54.274092   50270 kubeadm.go:406] StartCluster complete in 5m55.488226201s
	I1207 21:22:54.274140   50270 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:22:54.274247   50270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:22:54.276679   50270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:22:54.276902   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:22:54.276991   50270 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:22:54.277064   50270 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277090   50270 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-483745"
	W1207 21:22:54.277103   50270 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:22:54.277101   50270 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277089   50270 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277116   50270 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:22:54.277127   50270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-483745"
	I1207 21:22:54.277152   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.277119   50270 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-483745"
	W1207 21:22:54.277169   50270 addons.go:240] addon metrics-server should already be in state true
	I1207 21:22:54.277208   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.277529   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277564   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277573   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.277581   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277591   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.277612   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.293696   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I1207 21:22:54.293908   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I1207 21:22:54.294118   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.294622   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.294642   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.294656   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.295100   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.295119   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.295182   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.295512   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.295671   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.295709   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I1207 21:22:54.295752   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.295791   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.296131   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.296662   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.296681   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.297077   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.297597   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.297635   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.299605   50270 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-483745"
	W1207 21:22:54.299630   50270 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:22:54.299658   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.300047   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.300087   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.314531   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1207 21:22:54.315168   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.315718   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.315804   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I1207 21:22:54.315809   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.316447   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.316491   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.316657   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.316979   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.317005   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.317340   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.317887   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.317945   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.319086   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.321272   50270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:22:54.320074   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I1207 21:22:54.322834   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:22:54.322849   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:22:54.322863   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.323218   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.323677   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.323689   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.323997   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.324166   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.326460   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.328172   50270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:22:54.327148   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.328366   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.329567   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.329588   50270 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:22:54.329593   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.329600   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:22:54.329613   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.329725   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.329909   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.330088   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.333435   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.334161   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.334192   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.334480   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.334786   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.334959   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.335091   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.336340   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40483
	I1207 21:22:54.336672   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.337021   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.337034   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.337316   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.337486   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.338808   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.339043   50270 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:22:54.339053   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:22:54.339064   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.341591   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.341937   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.341960   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.342127   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.342285   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.342453   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.342592   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.385908   50270 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-483745" context rescaled to 1 replicas
	I1207 21:22:54.385959   50270 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:22:54.387637   50270 out.go:177] * Verifying Kubernetes components...
	I1207 21:22:54.388616   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:22:54.604286   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:22:54.671574   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:22:54.671601   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:22:54.752688   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:22:54.752714   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:22:54.792943   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:22:54.847458   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:22:54.847489   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:22:54.916698   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:22:54.931860   50270 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-483745" to be "Ready" ...
	I1207 21:22:54.931924   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:22:55.152010   50270 node_ready.go:49] node "old-k8s-version-483745" has status "Ready":"True"
	I1207 21:22:55.152041   50270 node_ready.go:38] duration metric: took 220.147741ms waiting for node "old-k8s-version-483745" to be "Ready" ...
	I1207 21:22:55.152055   50270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:55.356283   50270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:55.654243   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.049922238s)
	I1207 21:22:55.654296   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.654313   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.654661   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.654687   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.654694   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Closing plugin on server side
	I1207 21:22:55.654703   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.654715   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.655010   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.655052   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.693855   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.693876   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.694176   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.694197   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.927642   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.13465835s)
	I1207 21:22:55.927714   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.927731   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.928056   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.928076   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.928087   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.928096   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.928395   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Closing plugin on server side
	I1207 21:22:55.928413   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.928428   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.033797   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117050773s)
	I1207 21:22:56.033845   50270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.101898699s)
	I1207 21:22:56.033881   50270 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1207 21:22:56.033850   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:56.033918   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:56.034207   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:56.034220   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.034229   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:56.034236   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:56.034460   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:56.034480   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.034516   50270 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-483745"
	I1207 21:22:56.036701   50270 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1207 21:22:56.038078   50270 addons.go:502] enable addons completed in 1.76109636s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1207 21:22:57.718454   50270 pod_ready.go:102] pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:58.708880   50270 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-jvh5w" not found
	I1207 21:22:58.708910   50270 pod_ready.go:81] duration metric: took 3.352602717s waiting for pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace to be "Ready" ...
	E1207 21:22:58.708920   50270 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-jvh5w" not found
	I1207 21:22:58.708930   50270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.715179   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace has status "Ready":"True"
	I1207 21:22:58.715205   50270 pod_ready.go:81] duration metric: took 6.268335ms waiting for pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.715219   50270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-42fzb" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.720511   50270 pod_ready.go:92] pod "kube-proxy-42fzb" in "kube-system" namespace has status "Ready":"True"
	I1207 21:22:58.720526   50270 pod_ready.go:81] duration metric: took 5.302238ms waiting for pod "kube-proxy-42fzb" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.720544   50270 pod_ready.go:38] duration metric: took 3.568467628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:58.720558   50270 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:22:58.720609   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:22:58.737687   50270 api_server.go:72] duration metric: took 4.351680673s to wait for apiserver process to appear ...
	I1207 21:22:58.737712   50270 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:22:58.737730   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:22:58.744722   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:22:58.745867   50270 api_server.go:141] control plane version: v1.16.0
	I1207 21:22:58.745887   50270 api_server.go:131] duration metric: took 8.167725ms to wait for apiserver health ...
	I1207 21:22:58.745897   50270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:22:58.750259   50270 system_pods.go:59] 4 kube-system pods found
	I1207 21:22:58.750278   50270 system_pods.go:61] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.750283   50270 system_pods.go:61] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.750292   50270 system_pods.go:61] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.750306   50270 system_pods.go:61] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.750319   50270 system_pods.go:74] duration metric: took 4.415504ms to wait for pod list to return data ...
	I1207 21:22:58.750328   50270 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:22:58.753151   50270 default_sa.go:45] found service account: "default"
	I1207 21:22:58.753173   50270 default_sa.go:55] duration metric: took 2.836309ms for default service account to be created ...
	I1207 21:22:58.753181   50270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:22:58.757164   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:58.757188   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.757195   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.757212   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.757223   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.757246   50270 retry.go:31] will retry after 195.542562ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:58.957411   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:58.957443   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.957451   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.957461   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.957471   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.957494   50270 retry.go:31] will retry after 294.291725ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:59.264559   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:59.264599   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:59.264608   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:59.264620   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:59.264632   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:59.264651   50270 retry.go:31] will retry after 392.704433ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:59.663939   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:59.663967   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:59.663973   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:59.663979   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:59.663985   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:59.664003   50270 retry.go:31] will retry after 598.787872ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:00.268415   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:00.268441   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:00.268447   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:00.268453   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:00.268458   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:00.268472   50270 retry.go:31] will retry after 554.6659ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:00.829267   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:00.829293   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:00.829299   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:00.829305   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:00.829309   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:00.829325   50270 retry.go:31] will retry after 832.708436ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:01.667497   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:01.667526   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:01.667532   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:01.667539   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:01.667543   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:01.667560   50270 retry.go:31] will retry after 824.504206ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:02.497009   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:02.497033   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:02.497038   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:02.497045   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:02.497049   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:02.497064   50270 retry.go:31] will retry after 1.335460815s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:03.837788   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:03.837816   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:03.837821   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:03.837828   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:03.837833   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:03.837848   50270 retry.go:31] will retry after 1.185883705s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:05.028679   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:05.028712   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:05.028721   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:05.028731   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:05.028738   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:05.028758   50270 retry.go:31] will retry after 2.162817833s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:07.196435   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:07.196468   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:07.196476   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:07.196485   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:07.196493   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:07.196512   50270 retry.go:31] will retry after 2.853202831s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:10.054277   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:10.054303   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:10.054308   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:10.054315   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:10.054320   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:10.054335   50270 retry.go:31] will retry after 3.392213767s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:13.452019   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:13.452046   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:13.452052   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:13.452059   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:13.452064   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:13.452081   50270 retry.go:31] will retry after 3.42315118s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:16.882830   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:16.882856   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:16.882861   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:16.882868   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:16.882873   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:16.882887   50270 retry.go:31] will retry after 3.42232982s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:20.310740   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:20.310766   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:20.310771   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:20.310780   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:20.310785   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:20.310801   50270 retry.go:31] will retry after 6.110306117s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:26.426492   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:26.426520   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:26.426525   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:26.426532   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:26.426537   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:26.426554   50270 retry.go:31] will retry after 5.458076236s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:31.890544   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:31.890575   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:31.890580   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:31.890589   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:31.890593   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:31.890611   50270 retry.go:31] will retry after 10.030622922s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:41.928589   50270 system_pods.go:86] 6 kube-system pods found
	I1207 21:23:41.928622   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:41.928630   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:23:41.928637   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:23:41.928642   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:41.928651   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:41.928659   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:41.928677   50270 retry.go:31] will retry after 11.183539963s: missing components: kube-controller-manager, kube-scheduler
	I1207 21:23:53.119257   50270 system_pods.go:86] 8 kube-system pods found
	I1207 21:23:53.119284   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:53.119292   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:23:53.119298   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:23:53.119304   50270 system_pods.go:89] "kube-controller-manager-old-k8s-version-483745" [069a811c-4601-4e3c-bf64-77e4cf8d8e0e] Pending
	I1207 21:23:53.119309   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:53.119315   50270 system_pods.go:89] "kube-scheduler-old-k8s-version-483745" [1fa6f211-aa49-4ab9-ba1d-d613e7673ba8] Running
	I1207 21:23:53.119325   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:53.119332   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:53.119353   50270 retry.go:31] will retry after 13.123307809s: missing components: kube-controller-manager
	I1207 21:24:06.249016   50270 system_pods.go:86] 8 kube-system pods found
	I1207 21:24:06.249042   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:24:06.249048   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:24:06.249054   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:24:06.249059   50270 system_pods.go:89] "kube-controller-manager-old-k8s-version-483745" [069a811c-4601-4e3c-bf64-77e4cf8d8e0e] Running
	I1207 21:24:06.249064   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:24:06.249068   50270 system_pods.go:89] "kube-scheduler-old-k8s-version-483745" [1fa6f211-aa49-4ab9-ba1d-d613e7673ba8] Running
	I1207 21:24:06.249074   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:24:06.249079   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:24:06.249087   50270 system_pods.go:126] duration metric: took 1m7.495900916s to wait for k8s-apps to be running ...
	I1207 21:24:06.249092   50270 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:24:06.249137   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:24:06.265801   50270 system_svc.go:56] duration metric: took 16.700976ms WaitForService to wait for kubelet.
	I1207 21:24:06.265820   50270 kubeadm.go:581] duration metric: took 1m11.879821949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:24:06.265837   50270 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:24:06.269326   50270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:24:06.269346   50270 node_conditions.go:123] node cpu capacity is 2
	I1207 21:24:06.269356   50270 node_conditions.go:105] duration metric: took 3.51576ms to run NodePressure ...
	I1207 21:24:06.269366   50270 start.go:228] waiting for startup goroutines ...
	I1207 21:24:06.269371   50270 start.go:233] waiting for cluster config update ...
	I1207 21:24:06.269384   50270 start.go:242] writing updated cluster config ...
	I1207 21:24:06.269660   50270 ssh_runner.go:195] Run: rm -f paused
	I1207 21:24:06.317992   50270 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1207 21:24:06.320122   50270 out.go:177] 
	W1207 21:24:06.321437   50270 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1207 21:24:06.322708   50270 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1207 21:24:06.324092   50270 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-483745" cluster and "default" namespace by default
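	For reference, the apiserver wait that api_server.go records above (checking https://192.168.61.171:8443/healthz until it answers 200 "ok") reduces to a simple poll-until-healthy loop. Below is a minimal standalone Go sketch of that pattern; it is not minikube's actual implementation. The endpoint URL and the 6-minute budget are taken from the log, and TLS verification is skipped purely to keep the example short (the real check trusts the cluster CA and uses growing retry intervals rather than a fixed sleep).

	// healthzwait.go - minimal sketch (assumptions noted above) of polling an
	// apiserver /healthz endpoint until it returns 200 or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: skip certificate verification instead of loading the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			// Fixed 2s pause; the real wait backs off with increasing intervals.
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.171:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}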
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 21:16:40 UTC, ends at Thu 2023-12-07 21:33:08 UTC. --
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.029287527Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:731e3a79ab38f46155ab50f53b2e7c6164f3f44ecfe796bc39b4692d157b14d4,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-tppp6,Uid:9204fc2a-3771-4b93-9e41-faa1cf036232,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701984177213655255,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-tppp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9204fc2a-3771-4b93-9e41-faa1cf036232,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T21:22:56.850325863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:13a5a78f3280613f4ee4bad7497b66422b29b4e1c1bb5182824fa6aae420a06c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5497aade-c717-4eb1-8cfd-d8f9122965
6c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701984177163945022,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497aade-c717-4eb1-8cfd-d8f91229656c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-07T21:22:55.926729139Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e1076c3f71771c1b2a43839bc07690a19d972c963722ff18bb750850d230eec,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-zv7xv,Uid:44eb0c7e-6ec5-4ff8-95f3-869272f00080,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701984174291218919,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-zv7xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44eb0c7e-6ec5-4ff8-95f3-869272f00080,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T21:22:53.935366609Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88d22b1eab8503d2dbfa0027cf3b918283869f3bda1676f1a62cb3ef4adf8a19,Metadata:&PodSandboxMetadata{Name:kube-proxy-42fzb,Uid:66e47a27-187e-4c1b-9d7
4-222927a4d2f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701984174127320360,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-42fzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e47a27-187e-4c1b-9d74-222927a4d2f8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T21:22:53.783187528Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eb8dccc9edfd76d8fa04f2910918b57c2ed4e1824d53b1d0dce23c83e5d691da,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-483745,Uid:5fe0a20b7f23231c3534616bc9499b9e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701984148498834593,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0a20b7f23231c3534616bc9499b9e,tier: contr
ol-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5fe0a20b7f23231c3534616bc9499b9e,kubernetes.io/config.seen: 2023-12-07T21:22:28.044050472Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b20cab56fbb70274eea614b9a5225e7bafe2cb73450401f80e80a9475dfdbf46,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-483745,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701984148487765537,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-12-07T21:22:28.040842813Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9948bd40f128298e2734ac22d311
35b0bc6f4e961ec1b79bc51bdd803033f50c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-483745,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701984148455972362,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-12-07T21:22:28.042485178Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-483745,Uid:156baefbb5614920114043110edcae59,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983830739890464,Labels:map[string]string{component: kube-apiserver,io
.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 156baefbb5614920114043110edcae59,kubernetes.io/config.seen: 2023-12-07T21:17:10.271025149Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=8f03e5da-d436-4a1e-9543-6737a038d436 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.029884136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a83737cd-d5af-4369-8590-5ad9a55e1ee1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.029940167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a83737cd-d5af-4369-8590-5ad9a55e1ee1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.030124907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58a8ba392edecb3d00810d90c19c371db1fc4a5035210547f76a909aef9f7b0a,PodSandboxId:13a5a78f3280613f4ee4bad7497b66422b29b4e1c1bb5182824fa6aae420a06c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984177630445265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497aade-c717-4eb1-8cfd-d8f91229656c,},Annotations:map[string]string{io.kubernetes.container.hash: 6cefebd1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc88f109dc5433f238266a0a9e0b4eb39f762a085ae8473e064fadf7842e9f7,PodSandboxId:88d22b1eab8503d2dbfa0027cf3b918283869f3bda1676f1a62cb3ef4adf8a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701984176165824480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42fzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e47a27-187e-4c1b-9d74-222927a4d2f8,},Annotations:map[string]string{io.kubernetes.container.hash: 52d7986e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b63104cb805ba9e0b90105331e31179e5f03f9bf8ca6b0664ff3ece5d42a07,PodSandboxId:4e1076c3f71771c1b2a43839bc07690a19d972c963722ff18bb750850d230eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701984174741472222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zv7xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44eb0c7e-6ec5-4ff8-95f3-869272f00080,},Annotations:map[string]string{io.kubernetes.container.hash: f8acee80,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331295ad8803ad3e15d8dcef37fc70853b317eb9c07a314aedb60d26833d9046,PodSandboxId:eb8dccc9edfd76d8fa04f2910918b57c2ed4e1824d53b1d0dce23c83e5d691da,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701984150450262474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0a20b7f23231c3534616bc9499b9e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3289cd04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f867a2721453142a19279352d199ffdf0ab052a5361866eb47e1c18452daba6c,PodSandboxId:b20cab56fbb70274eea614b9a5225e7bafe2cb73450401f80e80a9475dfdbf46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701984149272121058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4933bc8ac87550b10f3a57fdc04bb80ed9d40001b53169181f06c18a054ae55,PodSandboxId:9948bd40f128298e2734ac22d31135b0bc6f4e961ec1b79bc51bdd803033f50c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701984149008239075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cf790766fb639ad04b45229aa80df91433cb199260f206a7c81d3870128023,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701984148430913138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc89bc78e615568c4552af490164e6160551c5fefbcab838818bb1663ae3d8e0,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701983831233935451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a83737cd-d5af-4369-8590-5ad9a55e1ee1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.043158593Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=796d72b4-602e-4a93-8ad6-1f535afe3c8f name=/runtime.v1.RuntimeService/Version
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.043261406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=796d72b4-602e-4a93-8ad6-1f535afe3c8f name=/runtime.v1.RuntimeService/Version
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.046132028Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7b89dfbe-e9ef-406e-b6ab-4572ae7b1360 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.046657403Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984788046642103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=7b89dfbe-e9ef-406e-b6ab-4572ae7b1360 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.047454863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=680184b7-8f4a-4ddd-8c4e-4b18f7429d8e name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.047664467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=680184b7-8f4a-4ddd-8c4e-4b18f7429d8e name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.047908216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58a8ba392edecb3d00810d90c19c371db1fc4a5035210547f76a909aef9f7b0a,PodSandboxId:13a5a78f3280613f4ee4bad7497b66422b29b4e1c1bb5182824fa6aae420a06c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984177630445265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497aade-c717-4eb1-8cfd-d8f91229656c,},Annotations:map[string]string{io.kubernetes.container.hash: 6cefebd1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc88f109dc5433f238266a0a9e0b4eb39f762a085ae8473e064fadf7842e9f7,PodSandboxId:88d22b1eab8503d2dbfa0027cf3b918283869f3bda1676f1a62cb3ef4adf8a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701984176165824480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42fzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e47a27-187e-4c1b-9d74-222927a4d2f8,},Annotations:map[string]string{io.kubernetes.container.hash: 52d7986e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b63104cb805ba9e0b90105331e31179e5f03f9bf8ca6b0664ff3ece5d42a07,PodSandboxId:4e1076c3f71771c1b2a43839bc07690a19d972c963722ff18bb750850d230eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701984174741472222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zv7xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44eb0c7e-6ec5-4ff8-95f3-869272f00080,},Annotations:map[string]string{io.kubernetes.container.hash: f8acee80,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331295ad8803ad3e15d8dcef37fc70853b317eb9c07a314aedb60d26833d9046,PodSandboxId:eb8dccc9edfd76d8fa04f2910918b57c2ed4e1824d53b1d0dce23c83e5d691da,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701984150450262474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0a20b7f23231c3534616bc9499b9e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3289cd04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f867a2721453142a19279352d199ffdf0ab052a5361866eb47e1c18452daba6c,PodSandboxId:b20cab56fbb70274eea614b9a5225e7bafe2cb73450401f80e80a9475dfdbf46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701984149272121058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4933bc8ac87550b10f3a57fdc04bb80ed9d40001b53169181f06c18a054ae55,PodSandboxId:9948bd40f128298e2734ac22d31135b0bc6f4e961ec1b79bc51bdd803033f50c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701984149008239075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cf790766fb639ad04b45229aa80df91433cb199260f206a7c81d3870128023,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701984148430913138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc89bc78e615568c4552af490164e6160551c5fefbcab838818bb1663ae3d8e0,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701983831233935451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=680184b7-8f4a-4ddd-8c4e-4b18f7429d8e name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.091118116Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d5726091-9a14-405f-adbc-6ee1e019471d name=/runtime.v1.RuntimeService/Version
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.091204157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d5726091-9a14-405f-adbc-6ee1e019471d name=/runtime.v1.RuntimeService/Version
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.093175170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3af0ff98-4f37-4906-938c-3eb51a978f22 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.093679634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984788093663069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=3af0ff98-4f37-4906-938c-3eb51a978f22 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.094380320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4c0540ea-1bef-479d-b297-144b2f5dbb61 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.094462603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4c0540ea-1bef-479d-b297-144b2f5dbb61 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.094736288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58a8ba392edecb3d00810d90c19c371db1fc4a5035210547f76a909aef9f7b0a,PodSandboxId:13a5a78f3280613f4ee4bad7497b66422b29b4e1c1bb5182824fa6aae420a06c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984177630445265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497aade-c717-4eb1-8cfd-d8f91229656c,},Annotations:map[string]string{io.kubernetes.container.hash: 6cefebd1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc88f109dc5433f238266a0a9e0b4eb39f762a085ae8473e064fadf7842e9f7,PodSandboxId:88d22b1eab8503d2dbfa0027cf3b918283869f3bda1676f1a62cb3ef4adf8a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701984176165824480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42fzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e47a27-187e-4c1b-9d74-222927a4d2f8,},Annotations:map[string]string{io.kubernetes.container.hash: 52d7986e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b63104cb805ba9e0b90105331e31179e5f03f9bf8ca6b0664ff3ece5d42a07,PodSandboxId:4e1076c3f71771c1b2a43839bc07690a19d972c963722ff18bb750850d230eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701984174741472222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zv7xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44eb0c7e-6ec5-4ff8-95f3-869272f00080,},Annotations:map[string]string{io.kubernetes.container.hash: f8acee80,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331295ad8803ad3e15d8dcef37fc70853b317eb9c07a314aedb60d26833d9046,PodSandboxId:eb8dccc9edfd76d8fa04f2910918b57c2ed4e1824d53b1d0dce23c83e5d691da,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701984150450262474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0a20b7f23231c3534616bc9499b9e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3289cd04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f867a2721453142a19279352d199ffdf0ab052a5361866eb47e1c18452daba6c,PodSandboxId:b20cab56fbb70274eea614b9a5225e7bafe2cb73450401f80e80a9475dfdbf46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701984149272121058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4933bc8ac87550b10f3a57fdc04bb80ed9d40001b53169181f06c18a054ae55,PodSandboxId:9948bd40f128298e2734ac22d31135b0bc6f4e961ec1b79bc51bdd803033f50c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701984149008239075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cf790766fb639ad04b45229aa80df91433cb199260f206a7c81d3870128023,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701984148430913138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc89bc78e615568c4552af490164e6160551c5fefbcab838818bb1663ae3d8e0,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701983831233935451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4c0540ea-1bef-479d-b297-144b2f5dbb61 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.132637684Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ddd8c263-f9ca-4144-93d9-4106da3ae971 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.132727110Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ddd8c263-f9ca-4144-93d9-4106da3ae971 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.135194996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b5eb8cc2-5178-4ba1-86c2-0d514dc201e4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.135677178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984788135663801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=b5eb8cc2-5178-4ba1-86c2-0d514dc201e4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.136287100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3e3cf47b-a5b9-46b3-8b7b-3f5818436f38 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.136364602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3e3cf47b-a5b9-46b3-8b7b-3f5818436f38 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:33:08 old-k8s-version-483745 crio[716]: time="2023-12-07 21:33:08.136607462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58a8ba392edecb3d00810d90c19c371db1fc4a5035210547f76a909aef9f7b0a,PodSandboxId:13a5a78f3280613f4ee4bad7497b66422b29b4e1c1bb5182824fa6aae420a06c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984177630445265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497aade-c717-4eb1-8cfd-d8f91229656c,},Annotations:map[string]string{io.kubernetes.container.hash: 6cefebd1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc88f109dc5433f238266a0a9e0b4eb39f762a085ae8473e064fadf7842e9f7,PodSandboxId:88d22b1eab8503d2dbfa0027cf3b918283869f3bda1676f1a62cb3ef4adf8a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701984176165824480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42fzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e47a27-187e-4c1b-9d74-222927a4d2f8,},Annotations:map[string]string{io.kubernetes.container.hash: 52d7986e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b63104cb805ba9e0b90105331e31179e5f03f9bf8ca6b0664ff3ece5d42a07,PodSandboxId:4e1076c3f71771c1b2a43839bc07690a19d972c963722ff18bb750850d230eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701984174741472222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zv7xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44eb0c7e-6ec5-4ff8-95f3-869272f00080,},Annotations:map[string]string{io.kubernetes.container.hash: f8acee80,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331295ad8803ad3e15d8dcef37fc70853b317eb9c07a314aedb60d26833d9046,PodSandboxId:eb8dccc9edfd76d8fa04f2910918b57c2ed4e1824d53b1d0dce23c83e5d691da,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701984150450262474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0a20b7f23231c3534616bc9499b9e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3289cd04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f867a2721453142a19279352d199ffdf0ab052a5361866eb47e1c18452daba6c,PodSandboxId:b20cab56fbb70274eea614b9a5225e7bafe2cb73450401f80e80a9475dfdbf46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701984149272121058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4933bc8ac87550b10f3a57fdc04bb80ed9d40001b53169181f06c18a054ae55,PodSandboxId:9948bd40f128298e2734ac22d31135b0bc6f4e961ec1b79bc51bdd803033f50c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701984149008239075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cf790766fb639ad04b45229aa80df91433cb199260f206a7c81d3870128023,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701984148430913138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc89bc78e615568c4552af490164e6160551c5fefbcab838818bb1663ae3d8e0,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701983831233935451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3e3cf47b-a5b9-46b3-8b7b-3f5818436f38 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	58a8ba392edec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   13a5a78f32806       storage-provisioner
	afc88f109dc54       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   88d22b1eab850       kube-proxy-42fzb
	d0b63104cb805       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   4e1076c3f7177       coredns-5644d7b6d9-zv7xv
	331295ad8803a       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   eb8dccc9edfd7       etcd-old-k8s-version-483745
	f867a27214531       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   b20cab56fbb70       kube-controller-manager-old-k8s-version-483745
	b4933bc8ac875       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   9948bd40f1282       kube-scheduler-old-k8s-version-483745
	e0cf790766fb6       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            1                   de1f3bc1e6e08       kube-apiserver-old-k8s-version-483745
	bc89bc78e6155       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   15 minutes ago      Exited              kube-apiserver            0                   de1f3bc1e6e08       kube-apiserver-old-k8s-version-483745
	
	* 
	* ==> coredns [d0b63104cb805ba9e0b90105331e31179e5f03f9bf8ca6b0664ff3ece5d42a07] <==
	* .:53
	2023-12-07T21:22:55.291Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-12-07T21:22:55.291Z [INFO] CoreDNS-1.6.2
	2023-12-07T21:22:55.291Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-12-07T21:23:23.498Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-483745
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-483745
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=old-k8s-version-483745
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T21_22_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 21:22:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 21:32:34 +0000   Thu, 07 Dec 2023 21:22:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 21:32:34 +0000   Thu, 07 Dec 2023 21:22:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 21:32:34 +0000   Thu, 07 Dec 2023 21:22:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 21:32:34 +0000   Thu, 07 Dec 2023 21:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.171
	  Hostname:    old-k8s-version-483745
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 0503ac9ce1204b71b58758b2e780119d
	 System UUID:                0503ac9c-e120-4b71-b587-58b2e780119d
	 Boot ID:                    212aa850-f933-41b5-9d74-0efafc1dcbb0
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-zv7xv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-483745                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                kube-apiserver-old-k8s-version-483745             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                kube-controller-manager-old-k8s-version-483745    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                kube-proxy-42fzb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-483745             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                metrics-server-74d5856cc6-tppp6                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-483745     Node old-k8s-version-483745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-483745     Node old-k8s-version-483745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet, old-k8s-version-483745     Node old-k8s-version-483745 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-483745  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec 7 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069386] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.713381] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.628672] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148451] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.587213] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.284385] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.130545] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.165726] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.127736] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.228557] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[Dec 7 21:17] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +0.462761] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.525890] kauditd_printk_skb: 13 callbacks suppressed
	[Dec 7 21:18] kauditd_printk_skb: 4 callbacks suppressed
	[Dec 7 21:22] systemd-fstab-generator[3144]: Ignoring "noauto" for root device
	[ +27.586538] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 7 21:23] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [331295ad8803ad3e15d8dcef37fc70853b317eb9c07a314aedb60d26833d9046] <==
	* 2023-12-07 21:22:30.552006 I | raft: 136fc2291504415a became follower at term 0
	2023-12-07 21:22:30.552018 I | raft: newRaft 136fc2291504415a [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-12-07 21:22:30.552023 I | raft: 136fc2291504415a became follower at term 1
	2023-12-07 21:22:30.566443 W | auth: simple token is not cryptographically signed
	2023-12-07 21:22:30.571047 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-07 21:22:30.572394 I | etcdserver: 136fc2291504415a as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-07 21:22:30.573130 I | etcdserver/membership: added member 136fc2291504415a [https://192.168.61.171:2380] to cluster c5390b31b9ec6b0f
	2023-12-07 21:22:30.573888 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-07 21:22:30.574066 I | embed: listening for metrics on http://192.168.61.171:2381
	2023-12-07 21:22:30.574235 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-07 21:22:31.052750 I | raft: 136fc2291504415a is starting a new election at term 1
	2023-12-07 21:22:31.052891 I | raft: 136fc2291504415a became candidate at term 2
	2023-12-07 21:22:31.052904 I | raft: 136fc2291504415a received MsgVoteResp from 136fc2291504415a at term 2
	2023-12-07 21:22:31.053023 I | raft: 136fc2291504415a became leader at term 2
	2023-12-07 21:22:31.053031 I | raft: raft.node: 136fc2291504415a elected leader 136fc2291504415a at term 2
	2023-12-07 21:22:31.053306 I | etcdserver: setting up the initial cluster version to 3.3
	2023-12-07 21:22:31.055131 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-07 21:22:31.055289 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-07 21:22:31.055331 I | etcdserver: published {Name:old-k8s-version-483745 ClientURLs:[https://192.168.61.171:2379]} to cluster c5390b31b9ec6b0f
	2023-12-07 21:22:31.055350 I | embed: ready to serve client requests
	2023-12-07 21:22:31.055496 I | embed: ready to serve client requests
	2023-12-07 21:22:31.056853 I | embed: serving client requests on 192.168.61.171:2379
	2023-12-07 21:22:31.058877 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-07 21:32:31.079252 I | mvcc: store.index: compact 667
	2023-12-07 21:32:31.082211 I | mvcc: finished scheduled compaction at 667 (took 2.532887ms)
	
	* 
	* ==> kernel <==
	*  21:33:08 up 16 min,  0 users,  load average: 0.11, 0.12, 0.13
	Linux old-k8s-version-483745 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [bc89bc78e615568c4552af490164e6160551c5fefbcab838818bb1663ae3d8e0] <==
	* W1207 21:22:25.868022       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.876051       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.880318       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.911461       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.927216       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.928058       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.942936       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.945262       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.948081       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.949766       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.963329       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.998281       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.000478       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.009999       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.026139       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.026987       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.029166       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.042407       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.047806       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.066680       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.070070       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.083890       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.084728       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.087746       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.113518       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [e0cf790766fb639ad04b45229aa80df91433cb199260f206a7c81d3870128023] <==
	* I1207 21:25:57.541656       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1207 21:25:57.542086       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 21:25:57.542206       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:25:57.542287       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:27:35.280102       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1207 21:27:35.280216       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 21:27:35.280287       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:27:35.280295       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:28:35.280813       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1207 21:28:35.281092       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 21:28:35.281198       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:28:35.281236       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:30:35.281796       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1207 21:30:35.282086       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 21:30:35.282174       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:30:35.282197       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:32:35.284460       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1207 21:32:35.284956       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 21:32:35.285190       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:32:35.285262       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [f867a2721453142a19279352d199ffdf0ab052a5361866eb47e1c18452daba6c] <==
	* E1207 21:26:56.262310       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:27:10.308151       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:27:26.514489       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:27:42.310078       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:27:56.766778       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:28:14.312004       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:28:27.018864       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:28:46.313911       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:28:57.271334       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:29:18.316120       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:29:27.523374       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:29:50.318196       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:29:57.775288       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:30:22.320198       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:30:28.027400       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:30:54.322319       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:30:58.279208       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:31:26.324831       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:31:28.531344       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:31:58.327005       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:31:58.783482       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1207 21:32:29.035180       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:32:30.328867       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:32:59.287073       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:33:02.331234       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [afc88f109dc5433f238266a0a9e0b4eb39f762a085ae8473e064fadf7842e9f7] <==
	* W1207 21:22:56.475248       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1207 21:22:56.483269       1 node.go:135] Successfully retrieved node IP: 192.168.61.171
	I1207 21:22:56.483292       1 server_others.go:149] Using iptables Proxier.
	I1207 21:22:56.483618       1 server.go:529] Version: v1.16.0
	I1207 21:22:56.490877       1 config.go:313] Starting service config controller
	I1207 21:22:56.495960       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1207 21:22:56.491069       1 config.go:131] Starting endpoints config controller
	I1207 21:22:56.496145       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1207 21:22:56.596334       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1207 21:22:56.596408       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [b4933bc8ac87550b10f3a57fdc04bb80ed9d40001b53169181f06c18a054ae55] <==
	* W1207 21:22:34.327913       1 authentication.go:79] Authentication is disabled
	I1207 21:22:34.327923       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1207 21:22:34.328613       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1207 21:22:34.387414       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:22:34.387942       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 21:22:34.388090       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 21:22:34.388166       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 21:22:34.388270       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 21:22:34.388318       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 21:22:34.388365       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 21:22:34.390333       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 21:22:34.390581       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 21:22:34.391067       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:22:34.393750       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 21:22:35.389657       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:22:35.390642       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 21:22:35.392175       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 21:22:35.393371       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 21:22:35.394522       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 21:22:35.395284       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 21:22:35.395852       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 21:22:35.398219       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 21:22:35.398671       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 21:22:35.400818       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 21:22:35.400824       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 21:16:40 UTC, ends at Thu 2023-12-07 21:33:08 UTC. --
	Dec 07 21:28:38 old-k8s-version-483745 kubelet[3163]: E1207 21:28:38.008382    3163 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 07 21:28:38 old-k8s-version-483745 kubelet[3163]: E1207 21:28:38.008474    3163 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 07 21:28:38 old-k8s-version-483745 kubelet[3163]: E1207 21:28:38.009304    3163 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 07 21:28:38 old-k8s-version-483745 kubelet[3163]: E1207 21:28:38.009389    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 07 21:28:51 old-k8s-version-483745 kubelet[3163]: E1207 21:28:50.999989    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:29:01 old-k8s-version-483745 kubelet[3163]: E1207 21:29:01.996789    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:29:15 old-k8s-version-483745 kubelet[3163]: E1207 21:29:15.996457    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:29:29 old-k8s-version-483745 kubelet[3163]: E1207 21:29:29.996374    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:29:40 old-k8s-version-483745 kubelet[3163]: E1207 21:29:40.996738    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:29:54 old-k8s-version-483745 kubelet[3163]: E1207 21:29:54.996170    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:30:08 old-k8s-version-483745 kubelet[3163]: E1207 21:30:08.996151    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:30:23 old-k8s-version-483745 kubelet[3163]: E1207 21:30:23.997406    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:30:36 old-k8s-version-483745 kubelet[3163]: E1207 21:30:36.996456    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:30:48 old-k8s-version-483745 kubelet[3163]: E1207 21:30:48.997397    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:31:01 old-k8s-version-483745 kubelet[3163]: E1207 21:31:01.996732    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:31:16 old-k8s-version-483745 kubelet[3163]: E1207 21:31:16.997012    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:31:30 old-k8s-version-483745 kubelet[3163]: E1207 21:31:30.996429    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:31:42 old-k8s-version-483745 kubelet[3163]: E1207 21:31:42.996496    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:31:53 old-k8s-version-483745 kubelet[3163]: E1207 21:31:53.996802    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:32:08 old-k8s-version-483745 kubelet[3163]: E1207 21:32:08.996697    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:32:22 old-k8s-version-483745 kubelet[3163]: E1207 21:32:22.996294    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:32:28 old-k8s-version-483745 kubelet[3163]: E1207 21:32:28.094403    3163 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 07 21:32:36 old-k8s-version-483745 kubelet[3163]: E1207 21:32:36.996258    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:32:49 old-k8s-version-483745 kubelet[3163]: E1207 21:32:49.997869    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:33:01 old-k8s-version-483745 kubelet[3163]: E1207 21:33:01.996882    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [58a8ba392edecb3d00810d90c19c371db1fc4a5035210547f76a909aef9f7b0a] <==
	* I1207 21:22:57.736932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 21:22:57.747979       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 21:22:57.748162       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 21:22:57.757720       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 21:22:57.758013       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-483745_9ebadb03-9f62-4e02-9ef3-3252c0fc4977!
	I1207 21:22:57.763723       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"600ed97f-c126-4743-b541-bd4ad57551d8", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-483745_9ebadb03-9f62-4e02-9ef3-3252c0fc4977 became leader
	I1207 21:22:57.858514       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-483745_9ebadb03-9f62-4e02-9ef3-3252c0fc4977!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-483745 -n old-k8s-version-483745
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-483745 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-tppp6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-483745 describe pod metrics-server-74d5856cc6-tppp6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-483745 describe pod metrics-server-74d5856cc6-tppp6: exit status 1 (67.930785ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-tppp6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-483745 describe pod metrics-server-74d5856cc6-tppp6: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.24s)
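Note on the log above: the metrics-server ImagePullBackOff and the scheduler "forbidden" reflector errors are both expected in this profile. The addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table in the next section), so the pull can never succeed, and the system:kube-scheduler list failures are typically transient startup noise while the v1.16 control plane is still applying its bootstrap RBAC. A minimal sketch of manual checks against this context, assuming the cluster is still up and that the addon pods carry a k8s-app=metrics-server label (not confirmed by the log):

	kubectl --context old-k8s-version-483745 auth can-i list pods --as=system:kube-scheduler
	kubectl --context old-k8s-version-483745 -n kube-system get pods -o wide
	kubectl --context old-k8s-version-483745 -n kube-system describe pod -l k8s-app=metrics-server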

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (421.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-598346 -n embed-certs-598346
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-07 21:37:04.38595853 +0000 UTC m=+5748.554103505
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-598346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-598346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.987µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-598346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
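(The deployment info above is empty because the describe call itself hit the context deadline.) For reference, a rough command-line equivalent of the 9m0s wait and the follow-up describe that the Go harness performs, using the context, namespace, label, and timeout taken from the lines above; this is a sketch, not the harness's own code, and note that kubectl wait errors out immediately if no pod matches the selector, whereas the harness keeps polling:

	kubectl --context embed-certs-598346 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
	kubectl --context embed-certs-598346 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper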
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598346 -n embed-certs-598346
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-598346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-598346 logs -n 25: (1.319540113s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-483745        | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-121798 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | disable-driver-mounts-121798                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:10 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-598346            | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC | 07 Dec 23 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-950431             | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-275828  | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-483745             | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-598346                 | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-950431                  | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-275828       | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:35 UTC | 07 Dec 23 21:35 UTC |
	| start   | -p newest-cni-155321 --memory=2200 --alsologtostderr   | newest-cni-155321            | jenkins | v1.32.0 | 07 Dec 23 21:35 UTC | 07 Dec 23 21:36 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:36 UTC | 07 Dec 23 21:36 UTC |
	| addons  | enable metrics-server -p newest-cni-155321             | newest-cni-155321            | jenkins | v1.32.0 | 07 Dec 23 21:36 UTC | 07 Dec 23 21:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p auto-715748 --memory=3072                           | auto-715748                  | jenkins | v1.32.0 | 07 Dec 23 21:36 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p newest-cni-155321                                   | newest-cni-155321            | jenkins | v1.32.0 | 07 Dec 23 21:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 21:36:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 21:36:18.490907   56906 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:36:18.491105   56906 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:36:18.491117   56906 out.go:309] Setting ErrFile to fd 2...
	I1207 21:36:18.491124   56906 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:36:18.491408   56906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:36:18.492206   56906 out.go:303] Setting JSON to false
	I1207 21:36:18.493488   56906 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8325,"bootTime":1701976654,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:36:18.493566   56906 start.go:138] virtualization: kvm guest
	I1207 21:36:18.495960   56906 out.go:177] * [auto-715748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:36:18.498218   56906 notify.go:220] Checking for updates...
	I1207 21:36:18.498219   56906 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:36:18.499689   56906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:36:18.501236   56906 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:36:18.502634   56906 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:36:18.504124   56906 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:36:18.505794   56906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:36:18.507871   56906 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:36:18.507994   56906 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:36:18.508116   56906 config.go:182] Loaded profile config "newest-cni-155321": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:36:18.508208   56906 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:36:18.545461   56906 out.go:177] * Using the kvm2 driver based on user configuration
	I1207 21:36:18.546930   56906 start.go:298] selected driver: kvm2
	I1207 21:36:18.546948   56906 start.go:902] validating driver "kvm2" against <nil>
	I1207 21:36:18.546959   56906 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:36:18.547762   56906 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:36:18.547847   56906 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:36:18.563928   56906 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:36:18.563988   56906 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 21:36:18.564204   56906 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 21:36:18.564273   56906 cni.go:84] Creating CNI manager for ""
	I1207 21:36:18.564289   56906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:36:18.564309   56906 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 21:36:18.564323   56906 start_flags.go:323] config:
	{Name:auto-715748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-715748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:36:18.564501   56906 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:36:18.567364   56906 out.go:177] * Starting control plane node auto-715748 in cluster auto-715748
	I1207 21:36:18.568998   56906 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:36:18.569031   56906 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 21:36:18.569038   56906 cache.go:56] Caching tarball of preloaded images
	I1207 21:36:18.569131   56906 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:36:18.569143   56906 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 21:36:18.569229   56906 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/config.json ...
	I1207 21:36:18.569248   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/config.json: {Name:mkf2d357488ddc8ce9fd8fba5c01112d35dc788d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:36:18.569369   56906 start.go:365] acquiring machines lock for auto-715748: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:36:18.569395   56906 start.go:369] acquired machines lock for "auto-715748" in 14.843µs
	I1207 21:36:18.569407   56906 start.go:93] Provisioning new machine with config: &{Name:auto-715748 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-715748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:36:18.569460   56906 start.go:125] createHost starting for "" (driver="kvm2")
	I1207 21:36:18.571502   56906 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 21:36:18.571666   56906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:36:18.571708   56906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:36:18.586610   56906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I1207 21:36:18.587069   56906 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:36:18.587733   56906 main.go:141] libmachine: Using API Version  1
	I1207 21:36:18.587762   56906 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:36:18.588117   56906 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:36:18.588284   56906 main.go:141] libmachine: (auto-715748) Calling .GetMachineName
	I1207 21:36:18.588456   56906 main.go:141] libmachine: (auto-715748) Calling .DriverName
	I1207 21:36:18.588642   56906 start.go:159] libmachine.API.Create for "auto-715748" (driver="kvm2")
	I1207 21:36:18.588676   56906 client.go:168] LocalClient.Create starting
	I1207 21:36:18.588721   56906 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 21:36:18.588769   56906 main.go:141] libmachine: Decoding PEM data...
	I1207 21:36:18.588794   56906 main.go:141] libmachine: Parsing certificate...
	I1207 21:36:18.588860   56906 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 21:36:18.588897   56906 main.go:141] libmachine: Decoding PEM data...
	I1207 21:36:18.588917   56906 main.go:141] libmachine: Parsing certificate...
	I1207 21:36:18.588938   56906 main.go:141] libmachine: Running pre-create checks...
	I1207 21:36:18.588952   56906 main.go:141] libmachine: (auto-715748) Calling .PreCreateCheck
	I1207 21:36:18.589363   56906 main.go:141] libmachine: (auto-715748) Calling .GetConfigRaw
	I1207 21:36:18.589857   56906 main.go:141] libmachine: Creating machine...
	I1207 21:36:18.589877   56906 main.go:141] libmachine: (auto-715748) Calling .Create
	I1207 21:36:18.590052   56906 main.go:141] libmachine: (auto-715748) Creating KVM machine...
	I1207 21:36:18.591397   56906 main.go:141] libmachine: (auto-715748) DBG | found existing default KVM network
	I1207 21:36:18.592989   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:18.592805   56928 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:d9:97} reservation:<nil>}
	I1207 21:36:18.594200   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:18.594109   56928 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002d4040}
	I1207 21:36:18.599685   56906 main.go:141] libmachine: (auto-715748) DBG | trying to create private KVM network mk-auto-715748 192.168.50.0/24...
	I1207 21:36:18.676103   56906 main.go:141] libmachine: (auto-715748) DBG | private KVM network mk-auto-715748 192.168.50.0/24 created
	I1207 21:36:18.676155   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:18.676028   56928 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:36:18.676170   56906 main.go:141] libmachine: (auto-715748) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748 ...
	I1207 21:36:18.676194   56906 main.go:141] libmachine: (auto-715748) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 21:36:18.676227   56906 main.go:141] libmachine: (auto-715748) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 21:36:18.899500   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:18.899361   56928 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/id_rsa...
	I1207 21:36:19.115781   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:19.115646   56928 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/auto-715748.rawdisk...
	I1207 21:36:19.115819   56906 main.go:141] libmachine: (auto-715748) DBG | Writing magic tar header
	I1207 21:36:19.115836   56906 main.go:141] libmachine: (auto-715748) DBG | Writing SSH key tar header
	I1207 21:36:19.116395   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:19.116317   56928 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748 ...
	I1207 21:36:19.116459   56906 main.go:141] libmachine: (auto-715748) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748
	I1207 21:36:19.116532   56906 main.go:141] libmachine: (auto-715748) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748 (perms=drwx------)
	I1207 21:36:19.116579   56906 main.go:141] libmachine: (auto-715748) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 21:36:19.116596   56906 main.go:141] libmachine: (auto-715748) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 21:36:19.116616   56906 main.go:141] libmachine: (auto-715748) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 21:36:19.116627   56906 main.go:141] libmachine: (auto-715748) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 21:36:19.116638   56906 main.go:141] libmachine: (auto-715748) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 21:36:19.116650   56906 main.go:141] libmachine: (auto-715748) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 21:36:19.116667   56906 main.go:141] libmachine: (auto-715748) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:36:19.116680   56906 main.go:141] libmachine: (auto-715748) Creating domain...
	I1207 21:36:19.116697   56906 main.go:141] libmachine: (auto-715748) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 21:36:19.116719   56906 main.go:141] libmachine: (auto-715748) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 21:36:19.116733   56906 main.go:141] libmachine: (auto-715748) DBG | Checking permissions on dir: /home/jenkins
	I1207 21:36:19.116746   56906 main.go:141] libmachine: (auto-715748) DBG | Checking permissions on dir: /home
	I1207 21:36:19.116761   56906 main.go:141] libmachine: (auto-715748) DBG | Skipping /home - not owner
	I1207 21:36:19.117854   56906 main.go:141] libmachine: (auto-715748) define libvirt domain using xml: 
	I1207 21:36:19.117878   56906 main.go:141] libmachine: (auto-715748) <domain type='kvm'>
	I1207 21:36:19.117886   56906 main.go:141] libmachine: (auto-715748)   <name>auto-715748</name>
	I1207 21:36:19.117892   56906 main.go:141] libmachine: (auto-715748)   <memory unit='MiB'>3072</memory>
	I1207 21:36:19.117898   56906 main.go:141] libmachine: (auto-715748)   <vcpu>2</vcpu>
	I1207 21:36:19.117903   56906 main.go:141] libmachine: (auto-715748)   <features>
	I1207 21:36:19.117909   56906 main.go:141] libmachine: (auto-715748)     <acpi/>
	I1207 21:36:19.117937   56906 main.go:141] libmachine: (auto-715748)     <apic/>
	I1207 21:36:19.117951   56906 main.go:141] libmachine: (auto-715748)     <pae/>
	I1207 21:36:19.117967   56906 main.go:141] libmachine: (auto-715748)     
	I1207 21:36:19.117980   56906 main.go:141] libmachine: (auto-715748)   </features>
	I1207 21:36:19.118070   56906 main.go:141] libmachine: (auto-715748)   <cpu mode='host-passthrough'>
	I1207 21:36:19.118093   56906 main.go:141] libmachine: (auto-715748)   
	I1207 21:36:19.118100   56906 main.go:141] libmachine: (auto-715748)   </cpu>
	I1207 21:36:19.118115   56906 main.go:141] libmachine: (auto-715748)   <os>
	I1207 21:36:19.118123   56906 main.go:141] libmachine: (auto-715748)     <type>hvm</type>
	I1207 21:36:19.118129   56906 main.go:141] libmachine: (auto-715748)     <boot dev='cdrom'/>
	I1207 21:36:19.118140   56906 main.go:141] libmachine: (auto-715748)     <boot dev='hd'/>
	I1207 21:36:19.118168   56906 main.go:141] libmachine: (auto-715748)     <bootmenu enable='no'/>
	I1207 21:36:19.118206   56906 main.go:141] libmachine: (auto-715748)   </os>
	I1207 21:36:19.118217   56906 main.go:141] libmachine: (auto-715748)   <devices>
	I1207 21:36:19.118225   56906 main.go:141] libmachine: (auto-715748)     <disk type='file' device='cdrom'>
	I1207 21:36:19.118249   56906 main.go:141] libmachine: (auto-715748)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/boot2docker.iso'/>
	I1207 21:36:19.118258   56906 main.go:141] libmachine: (auto-715748)       <target dev='hdc' bus='scsi'/>
	I1207 21:36:19.118269   56906 main.go:141] libmachine: (auto-715748)       <readonly/>
	I1207 21:36:19.118278   56906 main.go:141] libmachine: (auto-715748)     </disk>
	I1207 21:36:19.118289   56906 main.go:141] libmachine: (auto-715748)     <disk type='file' device='disk'>
	I1207 21:36:19.118305   56906 main.go:141] libmachine: (auto-715748)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 21:36:19.118320   56906 main.go:141] libmachine: (auto-715748)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/auto-715748.rawdisk'/>
	I1207 21:36:19.118329   56906 main.go:141] libmachine: (auto-715748)       <target dev='hda' bus='virtio'/>
	I1207 21:36:19.118339   56906 main.go:141] libmachine: (auto-715748)     </disk>
	I1207 21:36:19.118351   56906 main.go:141] libmachine: (auto-715748)     <interface type='network'>
	I1207 21:36:19.118379   56906 main.go:141] libmachine: (auto-715748)       <source network='mk-auto-715748'/>
	I1207 21:36:19.118397   56906 main.go:141] libmachine: (auto-715748)       <model type='virtio'/>
	I1207 21:36:19.118418   56906 main.go:141] libmachine: (auto-715748)     </interface>
	I1207 21:36:19.118430   56906 main.go:141] libmachine: (auto-715748)     <interface type='network'>
	I1207 21:36:19.118443   56906 main.go:141] libmachine: (auto-715748)       <source network='default'/>
	I1207 21:36:19.118455   56906 main.go:141] libmachine: (auto-715748)       <model type='virtio'/>
	I1207 21:36:19.118469   56906 main.go:141] libmachine: (auto-715748)     </interface>
	I1207 21:36:19.118482   56906 main.go:141] libmachine: (auto-715748)     <serial type='pty'>
	I1207 21:36:19.118496   56906 main.go:141] libmachine: (auto-715748)       <target port='0'/>
	I1207 21:36:19.118513   56906 main.go:141] libmachine: (auto-715748)     </serial>
	I1207 21:36:19.118527   56906 main.go:141] libmachine: (auto-715748)     <console type='pty'>
	I1207 21:36:19.118538   56906 main.go:141] libmachine: (auto-715748)       <target type='serial' port='0'/>
	I1207 21:36:19.118551   56906 main.go:141] libmachine: (auto-715748)     </console>
	I1207 21:36:19.118573   56906 main.go:141] libmachine: (auto-715748)     <rng model='virtio'>
	I1207 21:36:19.118594   56906 main.go:141] libmachine: (auto-715748)       <backend model='random'>/dev/random</backend>
	I1207 21:36:19.118616   56906 main.go:141] libmachine: (auto-715748)     </rng>
	I1207 21:36:19.118626   56906 main.go:141] libmachine: (auto-715748)     
	I1207 21:36:19.118638   56906 main.go:141] libmachine: (auto-715748)     
	I1207 21:36:19.118652   56906 main.go:141] libmachine: (auto-715748)   </devices>
	I1207 21:36:19.118661   56906 main.go:141] libmachine: (auto-715748) </domain>
	I1207 21:36:19.118673   56906 main.go:141] libmachine: (auto-715748) 
	I1207 21:36:19.122798   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:31:f8:e3 in network default
	I1207 21:36:19.123353   56906 main.go:141] libmachine: (auto-715748) Ensuring networks are active...
	I1207 21:36:19.123372   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:19.124180   56906 main.go:141] libmachine: (auto-715748) Ensuring network default is active
	I1207 21:36:19.124637   56906 main.go:141] libmachine: (auto-715748) Ensuring network mk-auto-715748 is active
	I1207 21:36:19.125343   56906 main.go:141] libmachine: (auto-715748) Getting domain xml...
	I1207 21:36:19.126216   56906 main.go:141] libmachine: (auto-715748) Creating domain...
	I1207 21:36:20.481251   56906 main.go:141] libmachine: (auto-715748) Waiting to get IP...
	I1207 21:36:20.482208   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:20.482667   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:20.482717   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:20.482662   56928 retry.go:31] will retry after 277.584029ms: waiting for machine to come up
	I1207 21:36:20.762302   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:20.762815   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:20.762845   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:20.762768   56928 retry.go:31] will retry after 360.673521ms: waiting for machine to come up
	I1207 21:36:21.125204   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:21.125714   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:21.125737   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:21.125670   56928 retry.go:31] will retry after 461.277459ms: waiting for machine to come up
	I1207 21:36:21.588113   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:21.588516   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:21.588545   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:21.588458   56928 retry.go:31] will retry after 516.938407ms: waiting for machine to come up
	I1207 21:36:22.107270   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:22.107733   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:22.107757   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:22.107688   56928 retry.go:31] will retry after 571.819676ms: waiting for machine to come up
	I1207 21:36:22.681395   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:22.681794   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:22.681825   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:22.681749   56928 retry.go:31] will retry after 892.516695ms: waiting for machine to come up
	I1207 21:36:23.576189   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:23.576679   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:23.576702   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:23.576621   56928 retry.go:31] will retry after 985.154704ms: waiting for machine to come up
	I1207 21:36:24.562852   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:24.563356   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:24.563385   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:24.563297   56928 retry.go:31] will retry after 934.436328ms: waiting for machine to come up
	I1207 21:36:25.498922   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:25.499343   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:25.499366   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:25.499312   56928 retry.go:31] will retry after 1.292341725s: waiting for machine to come up
	I1207 21:36:26.792682   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:26.793144   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:26.793172   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:26.793107   56928 retry.go:31] will retry after 1.454521647s: waiting for machine to come up
	I1207 21:36:28.248564   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:28.249055   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:28.249081   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:28.249030   56928 retry.go:31] will retry after 2.467007994s: waiting for machine to come up
	I1207 21:36:30.718405   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:30.718848   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:30.718878   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:30.718811   56928 retry.go:31] will retry after 2.410166181s: waiting for machine to come up
	I1207 21:36:33.132269   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:33.132727   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:33.132749   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:33.132676   56928 retry.go:31] will retry after 3.898606536s: waiting for machine to come up
	I1207 21:36:37.032621   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:37.033054   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find current IP address of domain auto-715748 in network mk-auto-715748
	I1207 21:36:37.033076   56906 main.go:141] libmachine: (auto-715748) DBG | I1207 21:36:37.033035   56928 retry.go:31] will retry after 5.610448721s: waiting for machine to come up
	I1207 21:36:42.647716   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:42.648183   56906 main.go:141] libmachine: (auto-715748) Found IP for machine: 192.168.50.78
	I1207 21:36:42.648210   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has current primary IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:42.648234   56906 main.go:141] libmachine: (auto-715748) Reserving static IP address...
	I1207 21:36:42.648505   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find host DHCP lease matching {name: "auto-715748", mac: "52:54:00:a2:82:ab", ip: "192.168.50.78"} in network mk-auto-715748
	I1207 21:36:42.723020   56906 main.go:141] libmachine: (auto-715748) DBG | Getting to WaitForSSH function...
	I1207 21:36:42.723052   56906 main.go:141] libmachine: (auto-715748) Reserved static IP address: 192.168.50.78
	I1207 21:36:42.723064   56906 main.go:141] libmachine: (auto-715748) Waiting for SSH to be available...
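The block above is the KVM driver polling libvirt for the domain's DHCP lease, backing off a little more on each attempt until 192.168.50.78 shows up. A minimal Go sketch of that wait loop follows; lookupLeaseIP is a hypothetical stand-in for the libvirt lease query, and the timings are illustrative rather than minikube's actual retry schedule.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for querying libvirt's DHCP
// leases for the domain's MAC address; it fails until the guest has
// actually requested an address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupLeaseIP with a growing, jittered delay, the same
// shape as the "will retry after ..." lines in the log above.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // back off gradually between attempts
	}
	return "", fmt.Errorf("no DHCP lease for %s within %v", mac, deadline)
}

func main() {
	if ip, err := waitForIP("52:54:00:a2:82:ab", 3*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```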
	I1207 21:36:42.725672   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:42.725983   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748
	I1207 21:36:42.726008   56906 main.go:141] libmachine: (auto-715748) DBG | unable to find defined IP address of network mk-auto-715748 interface with MAC address 52:54:00:a2:82:ab
	I1207 21:36:42.726165   56906 main.go:141] libmachine: (auto-715748) DBG | Using SSH client type: external
	I1207 21:36:42.726188   56906 main.go:141] libmachine: (auto-715748) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/id_rsa (-rw-------)
	I1207 21:36:42.726235   56906 main.go:141] libmachine: (auto-715748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:36:42.726247   56906 main.go:141] libmachine: (auto-715748) DBG | About to run SSH command:
	I1207 21:36:42.726278   56906 main.go:141] libmachine: (auto-715748) DBG | exit 0
	I1207 21:36:42.729698   56906 main.go:141] libmachine: (auto-715748) DBG | SSH cmd err, output: exit status 255: 
	I1207 21:36:42.729719   56906 main.go:141] libmachine: (auto-715748) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1207 21:36:42.729727   56906 main.go:141] libmachine: (auto-715748) DBG | command : exit 0
	I1207 21:36:42.729733   56906 main.go:141] libmachine: (auto-715748) DBG | err     : exit status 255
	I1207 21:36:42.729749   56906 main.go:141] libmachine: (auto-715748) DBG | output  : 
	I1207 21:36:45.731283   56906 main.go:141] libmachine: (auto-715748) DBG | Getting to WaitForSSH function...
	I1207 21:36:45.733780   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:45.734242   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:45.734269   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:45.734349   56906 main.go:141] libmachine: (auto-715748) DBG | Using SSH client type: external
	I1207 21:36:45.734381   56906 main.go:141] libmachine: (auto-715748) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/id_rsa (-rw-------)
	I1207 21:36:45.734414   56906 main.go:141] libmachine: (auto-715748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:36:45.734426   56906 main.go:141] libmachine: (auto-715748) DBG | About to run SSH command:
	I1207 21:36:45.734438   56906 main.go:141] libmachine: (auto-715748) DBG | exit 0
	I1207 21:36:45.825595   56906 main.go:141] libmachine: (auto-715748) DBG | SSH cmd err, output: <nil>: 
	I1207 21:36:45.825892   56906 main.go:141] libmachine: (auto-715748) KVM machine creation complete!
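WaitForSSH above simply runs `exit 0` over SSH until it succeeds; the first attempt fails with exit status 255 because the lease had not resolved yet, and the retry a few seconds later returns cleanly. A sketch of that probe using the system ssh binary, as the external client type in the log does (flags abbreviated and paths hypothetical, not minikube's exact option set):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshAlive runs "exit 0" on the guest via the system ssh binary.
// A nil error means sshd is up and the key is accepted.
func sshAlive(user, host, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	)
	return cmd.Run()
}

func main() {
	// Hypothetical values; the log uses docker@192.168.50.78 with the
	// per-machine id_rsa under the minikube profile directory.
	for i := 0; i < 5; i++ {
		if err := sshAlive("docker", "192.168.50.78", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log retries after ~3s on failure
	}
	fmt.Println("gave up waiting for SSH")
}
```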
	I1207 21:36:45.826202   56906 main.go:141] libmachine: (auto-715748) Calling .GetConfigRaw
	I1207 21:36:45.826732   56906 main.go:141] libmachine: (auto-715748) Calling .DriverName
	I1207 21:36:45.826943   56906 main.go:141] libmachine: (auto-715748) Calling .DriverName
	I1207 21:36:45.827088   56906 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1207 21:36:45.827103   56906 main.go:141] libmachine: (auto-715748) Calling .GetState
	I1207 21:36:45.828550   56906 main.go:141] libmachine: Detecting operating system of created instance...
	I1207 21:36:45.828565   56906 main.go:141] libmachine: Waiting for SSH to be available...
	I1207 21:36:45.828571   56906 main.go:141] libmachine: Getting to WaitForSSH function...
	I1207 21:36:45.828578   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:45.830619   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:45.831077   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:45.831103   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:45.831475   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHPort
	I1207 21:36:45.831733   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:45.831917   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:45.832069   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHUsername
	I1207 21:36:45.832256   56906 main.go:141] libmachine: Using SSH client type: native
	I1207 21:36:45.832682   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.78 22 <nil> <nil>}
	I1207 21:36:45.832699   56906 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1207 21:36:45.944967   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:36:45.944993   56906 main.go:141] libmachine: Detecting the provisioner...
	I1207 21:36:45.945003   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:45.947883   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:45.948330   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:45.948359   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:45.948512   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHPort
	I1207 21:36:45.948707   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:45.948886   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:45.949006   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHUsername
	I1207 21:36:45.949190   56906 main.go:141] libmachine: Using SSH client type: native
	I1207 21:36:45.949518   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.78 22 <nil> <nil>}
	I1207 21:36:45.949533   56906 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1207 21:36:46.062796   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2b7375-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1207 21:36:46.062853   56906 main.go:141] libmachine: found compatible host: buildroot
	I1207 21:36:46.062862   56906 main.go:141] libmachine: Provisioning with buildroot...
	I1207 21:36:46.062871   56906 main.go:141] libmachine: (auto-715748) Calling .GetMachineName
	I1207 21:36:46.063143   56906 buildroot.go:166] provisioning hostname "auto-715748"
	I1207 21:36:46.063182   56906 main.go:141] libmachine: (auto-715748) Calling .GetMachineName
	I1207 21:36:46.063368   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:46.065950   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.066364   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:46.066391   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.066561   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHPort
	I1207 21:36:46.066722   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:46.066877   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:46.067011   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHUsername
	I1207 21:36:46.067175   56906 main.go:141] libmachine: Using SSH client type: native
	I1207 21:36:46.067624   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.78 22 <nil> <nil>}
	I1207 21:36:46.067642   56906 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-715748 && echo "auto-715748" | sudo tee /etc/hostname
	I1207 21:36:46.189960   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-715748
	
	I1207 21:36:46.189985   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:46.192735   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.193084   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:46.193112   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.193282   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHPort
	I1207 21:36:46.193472   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:46.193603   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:46.193715   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHUsername
	I1207 21:36:46.193836   56906 main.go:141] libmachine: Using SSH client type: native
	I1207 21:36:46.194212   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.78 22 <nil> <nil>}
	I1207 21:36:46.194231   56906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-715748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-715748/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-715748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:36:46.309992   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
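The hostname step runs the shell fragment shown above: set the hostname, write /etc/hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for the name. A small sketch that assembles the same fragment for an arbitrary name (the provisioner issues these as separate SSH commands; they are combined into one string here for brevity):

```go
package main

import "fmt"

// hostnameCommand reproduces the provisioning fragment from the log:
// set the hostname, then add or rewrite the 127.0.1.1 line in /etc/hosts.
func hostnameCommand(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameCommand("auto-715748"))
}
```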
	I1207 21:36:46.310021   56906 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:36:46.310042   56906 buildroot.go:174] setting up certificates
	I1207 21:36:46.310053   56906 provision.go:83] configureAuth start
	I1207 21:36:46.310066   56906 main.go:141] libmachine: (auto-715748) Calling .GetMachineName
	I1207 21:36:46.310347   56906 main.go:141] libmachine: (auto-715748) Calling .GetIP
	I1207 21:36:46.312851   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.313261   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:46.313288   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.313448   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:46.315528   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.315838   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:46.315863   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.315999   56906 provision.go:138] copyHostCerts
	I1207 21:36:46.316051   56906 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:36:46.316060   56906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:36:46.316126   56906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:36:46.316216   56906 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:36:46.316227   56906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:36:46.316251   56906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:36:46.316313   56906 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:36:46.316322   56906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:36:46.316342   56906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:36:46.316382   56906 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.auto-715748 san=[192.168.50.78 192.168.50.78 localhost 127.0.0.1 minikube auto-715748]
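copyHostCerts reuses the CA material from the host, and a fresh server certificate is then generated with the SANs listed above (the machine IP plus localhost, minikube and the node name). The sketch below shows the SAN wiring with crypto/x509; it is self-signed for brevity, whereas minikube signs server.pem with its own CA key.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the log line above.
	ips := []net.IP{net.ParseIP("192.168.50.78"), net.ParseIP("127.0.0.1")}
	dns := []string{"localhost", "minikube", "auto-715748"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-715748"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	// Self-signed here for brevity; the real server.pem is signed by the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```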
	I1207 21:36:46.410405   56906 provision.go:172] copyRemoteCerts
	I1207 21:36:46.410464   56906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:36:46.410484   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:46.412969   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.413269   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:46.413296   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.413503   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHPort
	I1207 21:36:46.413703   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:46.413859   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHUsername
	I1207 21:36:46.414023   56906 sshutil.go:53] new ssh client: &{IP:192.168.50.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/id_rsa Username:docker}
	I1207 21:36:46.498601   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:36:46.525834   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:36:46.550152   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1207 21:36:46.574573   56906 provision.go:86] duration metric: configureAuth took 264.509824ms
	I1207 21:36:46.574593   56906 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:36:46.574775   56906 config.go:182] Loaded profile config "auto-715748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:36:46.574861   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:46.577499   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.577893   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:46.577949   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.578144   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHPort
	I1207 21:36:46.578325   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:46.578504   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:46.578660   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHUsername
	I1207 21:36:46.578827   56906 main.go:141] libmachine: Using SSH client type: native
	I1207 21:36:46.579190   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.78 22 <nil> <nil>}
	I1207 21:36:46.579212   56906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:36:46.885662   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:36:46.885693   56906 main.go:141] libmachine: Checking connection to Docker...
	I1207 21:36:46.885704   56906 main.go:141] libmachine: (auto-715748) Calling .GetURL
	I1207 21:36:46.887080   56906 main.go:141] libmachine: (auto-715748) DBG | Using libvirt version 6000000
	I1207 21:36:46.889324   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.889618   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:46.889639   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.889830   56906 main.go:141] libmachine: Docker is up and running!
	I1207 21:36:46.889845   56906 main.go:141] libmachine: Reticulating splines...
	I1207 21:36:46.889851   56906 client.go:171] LocalClient.Create took 28.301165009s
	I1207 21:36:46.889870   56906 start.go:167] duration metric: libmachine.API.Create for "auto-715748" took 28.301232396s
	I1207 21:36:46.889878   56906 start.go:300] post-start starting for "auto-715748" (driver="kvm2")
	I1207 21:36:46.889894   56906 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:36:46.889908   56906 main.go:141] libmachine: (auto-715748) Calling .DriverName
	I1207 21:36:46.890144   56906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:36:46.890178   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:46.892060   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.892359   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:46.892388   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:46.892503   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHPort
	I1207 21:36:46.892666   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:46.892889   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHUsername
	I1207 21:36:46.893075   56906 sshutil.go:53] new ssh client: &{IP:192.168.50.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/id_rsa Username:docker}
	I1207 21:36:46.978511   56906 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:36:46.982587   56906 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:36:46.982608   56906 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:36:46.982670   56906 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:36:46.982748   56906 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:36:46.982846   56906 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:36:46.990639   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:36:47.012853   56906 start.go:303] post-start completed in 122.963574ms
	I1207 21:36:47.012895   56906 main.go:141] libmachine: (auto-715748) Calling .GetConfigRaw
	I1207 21:36:47.013437   56906 main.go:141] libmachine: (auto-715748) Calling .GetIP
	I1207 21:36:47.015761   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:47.016071   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:47.016101   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:47.016290   56906 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/config.json ...
	I1207 21:36:47.016445   56906 start.go:128] duration metric: createHost completed in 28.446976426s
	I1207 21:36:47.016468   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:47.018621   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:47.018895   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:47.018918   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:47.019015   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHPort
	I1207 21:36:47.019186   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:47.019300   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:47.019431   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHUsername
	I1207 21:36:47.019575   56906 main.go:141] libmachine: Using SSH client type: native
	I1207 21:36:47.019948   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.78 22 <nil> <nil>}
	I1207 21:36:47.019960   56906 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 21:36:47.130717   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701985007.117791304
	
	I1207 21:36:47.130746   56906 fix.go:206] guest clock: 1701985007.117791304
	I1207 21:36:47.130757   56906 fix.go:219] Guest: 2023-12-07 21:36:47.117791304 +0000 UTC Remote: 2023-12-07 21:36:47.016455657 +0000 UTC m=+28.583388091 (delta=101.335647ms)
	I1207 21:36:47.130782   56906 fix.go:190] guest clock delta is within tolerance: 101.335647ms
	I1207 21:36:47.130789   56906 start.go:83] releasing machines lock for "auto-715748", held for 28.561387322s
	I1207 21:36:47.130816   56906 main.go:141] libmachine: (auto-715748) Calling .DriverName
	I1207 21:36:47.131078   56906 main.go:141] libmachine: (auto-715748) Calling .GetIP
	I1207 21:36:47.134014   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:47.134386   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:47.134424   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:47.134691   56906 main.go:141] libmachine: (auto-715748) Calling .DriverName
	I1207 21:36:47.135191   56906 main.go:141] libmachine: (auto-715748) Calling .DriverName
	I1207 21:36:47.135382   56906 main.go:141] libmachine: (auto-715748) Calling .DriverName
	I1207 21:36:47.135493   56906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:36:47.135554   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:47.135798   56906 ssh_runner.go:195] Run: cat /version.json
	I1207 21:36:47.135856   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHHostname
	I1207 21:36:47.138652   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:47.138796   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:47.139022   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:47.139048   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:47.139123   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:47.139157   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:47.139186   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHPort
	I1207 21:36:47.139396   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHPort
	I1207 21:36:47.139401   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:47.139568   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHKeyPath
	I1207 21:36:47.139580   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHUsername
	I1207 21:36:47.139746   56906 main.go:141] libmachine: (auto-715748) Calling .GetSSHUsername
	I1207 21:36:47.139754   56906 sshutil.go:53] new ssh client: &{IP:192.168.50.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/id_rsa Username:docker}
	I1207 21:36:47.139929   56906 sshutil.go:53] new ssh client: &{IP:192.168.50.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/auto-715748/id_rsa Username:docker}
	I1207 21:36:47.240782   56906 ssh_runner.go:195] Run: systemctl --version
	I1207 21:36:47.246098   56906 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:36:47.401648   56906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:36:47.408037   56906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:36:47.408105   56906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:36:47.422255   56906 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:36:47.422276   56906 start.go:475] detecting cgroup driver to use...
	I1207 21:36:47.422333   56906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:36:47.441978   56906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:36:47.454975   56906 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:36:47.455033   56906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:36:47.468470   56906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:36:47.482091   56906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:36:47.603704   56906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:36:47.731124   56906 docker.go:219] disabling docker service ...
	I1207 21:36:47.731187   56906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:36:47.744627   56906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:36:47.758377   56906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:36:47.886243   56906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:36:48.014045   56906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:36:48.027350   56906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:36:48.044530   56906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:36:48.044609   56906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:36:48.054893   56906 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:36:48.054949   56906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:36:48.064843   56906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:36:48.075981   56906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:36:48.086109   56906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
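CRI-O is configured above by sed-editing /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9 and switch cgroup_manager to cgroupfs. A sketch that applies the same two rewrites to a config string (illustrative; the log does this with remote sed commands rather than Go):

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the substitutions the log performs with sed:
// pin the pause image and switch the cgroup manager to cgroupfs.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.6\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(sample))
}
```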
	I1207 21:36:48.096674   56906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:36:48.105799   56906 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:36:48.105849   56906 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:36:48.119821   56906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:36:48.128731   56906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:36:48.248317   56906 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:36:48.426439   56906 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:36:48.426512   56906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:36:48.435382   56906 start.go:543] Will wait 60s for crictl version
	I1207 21:36:48.435470   56906 ssh_runner.go:195] Run: which crictl
	I1207 21:36:48.439390   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:36:48.481943   56906 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:36:48.482028   56906 ssh_runner.go:195] Run: crio --version
	I1207 21:36:48.530395   56906 ssh_runner.go:195] Run: crio --version
	I1207 21:36:48.582798   56906 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:36:48.584227   56906 main.go:141] libmachine: (auto-715748) Calling .GetIP
	I1207 21:36:48.586607   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:48.586965   56906 main.go:141] libmachine: (auto-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:82:ab", ip: ""} in network mk-auto-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:36:35 +0000 UTC Type:0 Mac:52:54:00:a2:82:ab Iaid: IPaddr:192.168.50.78 Prefix:24 Hostname:auto-715748 Clientid:01:52:54:00:a2:82:ab}
	I1207 21:36:48.586999   56906 main.go:141] libmachine: (auto-715748) DBG | domain auto-715748 has defined IP address 192.168.50.78 and MAC address 52:54:00:a2:82:ab in network mk-auto-715748
	I1207 21:36:48.587201   56906 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1207 21:36:48.591262   56906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:36:48.603622   56906 localpath.go:92] copying /home/jenkins/minikube-integration/17719-9628/.minikube/client.crt -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt
	I1207 21:36:48.603772   56906 localpath.go:117] copying /home/jenkins/minikube-integration/17719-9628/.minikube/client.key -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.key
	I1207 21:36:48.603904   56906 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:36:48.603977   56906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:36:48.637258   56906 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:36:48.637324   56906 ssh_runner.go:195] Run: which lz4
	I1207 21:36:48.641586   56906 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 21:36:48.645727   56906 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:36:48.645751   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:36:50.397169   56906 crio.go:444] Took 1.755628 seconds to copy over tarball
	I1207 21:36:50.397257   56906 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:36:53.726161   56906 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.328880694s)
	I1207 21:36:53.726207   56906 crio.go:451] Took 3.329016 seconds to extract the tarball
	I1207 21:36:53.726219   56906 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:36:53.777393   56906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:36:53.857107   56906 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:36:53.857131   56906 cache_images.go:84] Images are preloaded, skipping loading
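When the expected images are missing from CRI-O's store, the preloaded tarball is copied over and unpacked with `tar -I lz4 -C /var -xf`, after which the image check above passes. A sketch of the extract-and-clean-up step, shelling out the same way the log does (paths taken from the log; run on the guest, not the host):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Decompress the preloaded image tarball through lz4 into /var,
	// then remove the archive, mirroring the two log steps above.
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, "cleanup failed:", err)
	}
}
```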
	I1207 21:36:53.857201   56906 ssh_runner.go:195] Run: crio config
	I1207 21:36:53.929858   56906 cni.go:84] Creating CNI manager for ""
	I1207 21:36:53.929878   56906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:36:53.929896   56906 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:36:53.929912   56906 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-715748 NodeName:auto-715748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:36:53.930069   56906 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-715748"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
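The generated kubeadm config above stitches together InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents with the node-specific values filled in. A sketch that renders just the InitConfiguration portion from a template (an illustrative template, not the one minikube ships):

```go
package main

import (
	"os"
	"text/template"
)

// A trimmed-down version of the InitConfiguration section above, with the
// node-specific fields pulled out.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	params := struct {
		NodeName, NodeIP, CRISocket string
		APIServerPort               int
	}{
		NodeName:      "auto-715748",
		NodeIP:        "192.168.50.78",
		CRISocket:     "/var/run/crio/crio.sock",
		APIServerPort: 8443,
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```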
	
	I1207 21:36:53.930138   56906 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=auto-715748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:auto-715748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:36:53.930185   56906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:36:53.939780   56906 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:36:53.939853   56906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:36:53.948181   56906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (370 bytes)
	I1207 21:36:53.964614   56906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:36:53.982036   56906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
	I1207 21:36:53.999768   56906 ssh_runner.go:195] Run: grep 192.168.50.78	control-plane.minikube.internal$ /etc/hosts
	I1207 21:36:54.003697   56906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:36:54.017456   56906 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748 for IP: 192.168.50.78
	I1207 21:36:54.017492   56906 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:36:54.017637   56906 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:36:54.017680   56906 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:36:54.017769   56906 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.key
	I1207 21:36:54.017791   56906 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.key.f6d63375
	I1207 21:36:54.017806   56906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.crt.f6d63375 with IP's: [192.168.50.78 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 21:36:54.228083   56906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.crt.f6d63375 ...
	I1207 21:36:54.228114   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.crt.f6d63375: {Name:mk3f5b15c9614049792c6d44537f5dd73590cb13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:36:54.228270   56906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.key.f6d63375 ...
	I1207 21:36:54.228283   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.key.f6d63375: {Name:mk3978311a5770f571dcaee2b5afde1210cf78c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:36:54.228352   56906 certs.go:337] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.crt.f6d63375 -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.crt
	I1207 21:36:54.228410   56906 certs.go:341] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.key.f6d63375 -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.key
	I1207 21:36:54.228457   56906 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/proxy-client.key
	I1207 21:36:54.228469   56906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/proxy-client.crt with IP's: []
	I1207 21:36:54.591789   56906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/proxy-client.crt ...
	I1207 21:36:54.591817   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/proxy-client.crt: {Name:mk90876469b49c246bb5e59ca3fd75a1ec61c15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:36:54.591976   56906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/proxy-client.key ...
	I1207 21:36:54.591986   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/proxy-client.key: {Name:mk6d3e698aa7210a5ed9b99e9d8fe4147b6c13cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:36:54.592172   56906 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:36:54.592211   56906 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:36:54.592222   56906 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:36:54.592254   56906 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:36:54.592292   56906 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:36:54.592315   56906 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:36:54.592360   56906 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:36:54.592936   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:36:54.617394   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:36:54.640855   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:36:54.665265   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:36:54.689207   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:36:54.712435   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:36:54.736446   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:36:54.762463   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:36:54.786877   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:36:54.811942   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:36:54.838621   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:36:54.864185   56906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:36:54.884595   56906 ssh_runner.go:195] Run: openssl version
	I1207 21:36:54.890781   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:36:54.901037   56906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:36:54.905542   56906 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:36:54.905613   56906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:36:54.911656   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:36:54.921563   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:36:54.931226   56906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:36:54.936066   56906 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:36:54.936140   56906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:36:54.941869   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:36:54.952390   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:36:54.962658   56906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:36:54.967709   56906 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:36:54.967777   56906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:36:54.973509   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
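For reference, the openssl/ln steps above follow OpenSSL's subject-hash lookup convention: each trusted CA under /etc/ssl/certs gets a symlink named after its subject hash with a ".0" suffix, which is exactly what `openssl x509 -hash` prints. A minimal sketch of the same step done by hand on the node (paths taken from the log above; the hash value is whatever openssl emits for the certificate):

    # compute the subject hash of the CA and create the <hash>.0 symlink OpenSSL looks up
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"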
	I1207 21:36:54.982876   56906 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:36:54.987197   56906 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 21:36:54.987253   56906 kubeadm.go:404] StartCluster: {Name:auto-715748 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:auto-715748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.78 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:36:54.987332   56906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:36:54.987406   56906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:36:55.031562   56906 cri.go:89] found id: ""
	I1207 21:36:55.031634   56906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:36:55.040778   56906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:36:55.049472   56906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:36:55.058087   56906 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:36:55.058130   56906 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:36:55.116839   56906 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 21:36:55.116957   56906 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:36:55.256733   56906 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:36:55.256874   56906 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:36:55.256974   56906 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:36:55.474213   56906 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:36:55.476213   56906 out.go:204]   - Generating certificates and keys ...
	I1207 21:36:55.476333   56906 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:36:55.476413   56906 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:36:55.658484   56906 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 21:36:55.766559   56906 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1207 21:36:55.919377   56906 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1207 21:36:56.077952   56906 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1207 21:36:56.266434   56906 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1207 21:36:56.266776   56906 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-715748 localhost] and IPs [192.168.50.78 127.0.0.1 ::1]
	I1207 21:36:56.407152   56906 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1207 21:36:56.407305   56906 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-715748 localhost] and IPs [192.168.50.78 127.0.0.1 ::1]
	I1207 21:36:56.530418   56906 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 21:36:56.642098   56906 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 21:36:56.825809   56906 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1207 21:36:56.826043   56906 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:36:56.963151   56906 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:36:57.131043   56906 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:36:57.354137   56906 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:36:57.558625   56906 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:36:57.559229   56906 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:36:57.561819   56906 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:36:57.563811   56906 out.go:204]   - Booting up control plane ...
	I1207 21:36:57.563958   56906 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:36:57.564070   56906 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:36:57.564181   56906 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:36:57.584609   56906 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:36:57.585507   56906 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:36:57.585602   56906 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:36:57.718788   56906 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
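As a hedged cross-check of the kubeadm phases logged above (assuming shell access to the node, e.g. via `minikube -p auto-715748 ssh`), the static Pod manifests and the SANs reported for the etcd serving certificate can be inspected directly; the paths come from the certificateDir and manifest folder named in the log:

    # static Pod manifests kubeadm wrote for the control plane
    sudo ls /etc/kubernetes/manifests
    # confirm the DNS names and IPs baked into the etcd serving certificate
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/etcd/server.crt | grep -A1 'Subject Alternative Name'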
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 21:15:32 UTC, ends at Thu 2023-12-07 21:37:05 UTC. --
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.140735257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701985025140722182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=54f852a2-30c5-4d28-8462-515bb503bd34 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.141851531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=81c3ec3a-fdb3-4189-b084-d62943df1963 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.141979008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=81c3ec3a-fdb3-4189-b084-d62943df1963 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.142129950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55d67718482d4572c85c9612435da05cbca02696fb9f0abe9867d2a9bb2ab0f7,PodSandboxId:71157e5ee49d22315c42b38ece28572dd10ea5aae17a4f5c40cde624172435f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984059926992953,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14,},Annotations:map[string]string{io.kubernetes.container.hash: 89a041ad,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a33dbfd0cb2ec9d98b7c040441bd146c8c8fe27914e3f1137151910d6a0dae,PodSandboxId:7ae1c8c92da2d1d7912d176fcd207453e8918abd4bb896bf97603df1bd7d86b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701984058368483459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h4pmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3cc315-efaf-47b9-86e3-851cc930461b,},Annotations:map[string]string{io.kubernetes.container.hash: 70f362f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd79a03ef1e58abdb0f13478da45c2551657e49455d2b8e2adbbcb6becd6c59,PodSandboxId:b34bf4aa578ece3e829560ca325f38a2417209b19c7370b3a2affbed66762bfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701984057169611507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nllk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89c53a27-fa3e-40e9-b180-1bb6ae5c7b62,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe6f40c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e322f9a929a334072d4474e587a9eaa44ac85866bd4d222de6223371d43f99,PodSandboxId:31060c233cb8658454b6f8f9d659e14b51a4994447ea00ac2c70a860f616993f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701984036059091180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: bf95ab796fecf05f0e74a5a77549e004,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a89e2a11dada38a99b94b0e571ef5ff2cd3e0d8dba7a7bc08f2a267048bf099b,PodSandboxId:487e5cee31477f6068dc67ce06c2b3c639e440650261c6df8a1a0131f0ee39be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701984035831987046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557f25590e782dbdd3c0d081d2d91cf1,},Annotations:
map[string]string{io.kubernetes.container.hash: 85891a44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee851783f696a899540cc4d7612b26aa3902587cd2c8bf254e4737de2ed45458,PodSandboxId:e8940552f6ea0fb01ac6b3d337bfe6519629ec0c6ab3f47e93cdc549f015c10f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701984035377278788,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a487adf7114a53a4bb89
ae3f412bd87,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac812ba232eb7a81be1ff8566eb7f1058ed1c55c8dd708182faa198d3f19f057,PodSandboxId:800c608688cf7b32339ff05dff030d73e3028a125eadd1d37914a6216a6c16c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701984035191385748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74569b3ec3f3376a1fb2afd7e14df1
1,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=81c3ec3a-fdb3-4189-b084-d62943df1963 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.184173425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=616c0ff3-fd8c-4715-ae28-aa258e557a67 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.184267763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=616c0ff3-fd8c-4715-ae28-aa258e557a67 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.185666570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3272e6e4-e7d3-46c3-80fa-f1b5b49e81e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.186157472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701985025186139485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3272e6e4-e7d3-46c3-80fa-f1b5b49e81e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.186700942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6e35a1ee-a55a-478f-a165-8f81e5c61456 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.186790594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6e35a1ee-a55a-478f-a165-8f81e5c61456 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.187037762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55d67718482d4572c85c9612435da05cbca02696fb9f0abe9867d2a9bb2ab0f7,PodSandboxId:71157e5ee49d22315c42b38ece28572dd10ea5aae17a4f5c40cde624172435f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984059926992953,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14,},Annotations:map[string]string{io.kubernetes.container.hash: 89a041ad,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a33dbfd0cb2ec9d98b7c040441bd146c8c8fe27914e3f1137151910d6a0dae,PodSandboxId:7ae1c8c92da2d1d7912d176fcd207453e8918abd4bb896bf97603df1bd7d86b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701984058368483459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h4pmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3cc315-efaf-47b9-86e3-851cc930461b,},Annotations:map[string]string{io.kubernetes.container.hash: 70f362f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd79a03ef1e58abdb0f13478da45c2551657e49455d2b8e2adbbcb6becd6c59,PodSandboxId:b34bf4aa578ece3e829560ca325f38a2417209b19c7370b3a2affbed66762bfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701984057169611507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nllk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89c53a27-fa3e-40e9-b180-1bb6ae5c7b62,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe6f40c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e322f9a929a334072d4474e587a9eaa44ac85866bd4d222de6223371d43f99,PodSandboxId:31060c233cb8658454b6f8f9d659e14b51a4994447ea00ac2c70a860f616993f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701984036059091180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: bf95ab796fecf05f0e74a5a77549e004,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a89e2a11dada38a99b94b0e571ef5ff2cd3e0d8dba7a7bc08f2a267048bf099b,PodSandboxId:487e5cee31477f6068dc67ce06c2b3c639e440650261c6df8a1a0131f0ee39be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701984035831987046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557f25590e782dbdd3c0d081d2d91cf1,},Annotations:
map[string]string{io.kubernetes.container.hash: 85891a44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee851783f696a899540cc4d7612b26aa3902587cd2c8bf254e4737de2ed45458,PodSandboxId:e8940552f6ea0fb01ac6b3d337bfe6519629ec0c6ab3f47e93cdc549f015c10f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701984035377278788,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a487adf7114a53a4bb89
ae3f412bd87,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac812ba232eb7a81be1ff8566eb7f1058ed1c55c8dd708182faa198d3f19f057,PodSandboxId:800c608688cf7b32339ff05dff030d73e3028a125eadd1d37914a6216a6c16c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701984035191385748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74569b3ec3f3376a1fb2afd7e14df1
1,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6e35a1ee-a55a-478f-a165-8f81e5c61456 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.229987565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d1862cad-7a83-4a8a-acd7-519e9f88a5bf name=/runtime.v1.RuntimeService/Version
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.230104968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d1862cad-7a83-4a8a-acd7-519e9f88a5bf name=/runtime.v1.RuntimeService/Version
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.231283385Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7c36699d-6968-4c27-993e-e38e6bc8019d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.231982496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701985025231869165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7c36699d-6968-4c27-993e-e38e6bc8019d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.233044373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=454dbef3-e4ef-4774-ba09-a696b3b381da name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.233142285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=454dbef3-e4ef-4774-ba09-a696b3b381da name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.233460421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55d67718482d4572c85c9612435da05cbca02696fb9f0abe9867d2a9bb2ab0f7,PodSandboxId:71157e5ee49d22315c42b38ece28572dd10ea5aae17a4f5c40cde624172435f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984059926992953,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14,},Annotations:map[string]string{io.kubernetes.container.hash: 89a041ad,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a33dbfd0cb2ec9d98b7c040441bd146c8c8fe27914e3f1137151910d6a0dae,PodSandboxId:7ae1c8c92da2d1d7912d176fcd207453e8918abd4bb896bf97603df1bd7d86b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701984058368483459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h4pmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3cc315-efaf-47b9-86e3-851cc930461b,},Annotations:map[string]string{io.kubernetes.container.hash: 70f362f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd79a03ef1e58abdb0f13478da45c2551657e49455d2b8e2adbbcb6becd6c59,PodSandboxId:b34bf4aa578ece3e829560ca325f38a2417209b19c7370b3a2affbed66762bfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701984057169611507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nllk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89c53a27-fa3e-40e9-b180-1bb6ae5c7b62,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe6f40c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e322f9a929a334072d4474e587a9eaa44ac85866bd4d222de6223371d43f99,PodSandboxId:31060c233cb8658454b6f8f9d659e14b51a4994447ea00ac2c70a860f616993f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701984036059091180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: bf95ab796fecf05f0e74a5a77549e004,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a89e2a11dada38a99b94b0e571ef5ff2cd3e0d8dba7a7bc08f2a267048bf099b,PodSandboxId:487e5cee31477f6068dc67ce06c2b3c639e440650261c6df8a1a0131f0ee39be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701984035831987046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557f25590e782dbdd3c0d081d2d91cf1,},Annotations:
map[string]string{io.kubernetes.container.hash: 85891a44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee851783f696a899540cc4d7612b26aa3902587cd2c8bf254e4737de2ed45458,PodSandboxId:e8940552f6ea0fb01ac6b3d337bfe6519629ec0c6ab3f47e93cdc549f015c10f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701984035377278788,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a487adf7114a53a4bb89
ae3f412bd87,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac812ba232eb7a81be1ff8566eb7f1058ed1c55c8dd708182faa198d3f19f057,PodSandboxId:800c608688cf7b32339ff05dff030d73e3028a125eadd1d37914a6216a6c16c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701984035191385748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74569b3ec3f3376a1fb2afd7e14df1
1,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=454dbef3-e4ef-4774-ba09-a696b3b381da name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.276181278Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f07f9d70-6757-44a0-864d-6768555ae1d5 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.276295942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f07f9d70-6757-44a0-864d-6768555ae1d5 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.278720078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=59178514-86b4-4aff-8aba-5bac4f5ac69d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.279425012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701985025279402339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=59178514-86b4-4aff-8aba-5bac4f5ac69d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.280210788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b8b0e1d8-7172-41be-a21b-74fb62baafc1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.280273657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b8b0e1d8-7172-41be-a21b-74fb62baafc1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:37:05 embed-certs-598346 crio[714]: time="2023-12-07 21:37:05.280509288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55d67718482d4572c85c9612435da05cbca02696fb9f0abe9867d2a9bb2ab0f7,PodSandboxId:71157e5ee49d22315c42b38ece28572dd10ea5aae17a4f5c40cde624172435f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984059926992953,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14,},Annotations:map[string]string{io.kubernetes.container.hash: 89a041ad,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a33dbfd0cb2ec9d98b7c040441bd146c8c8fe27914e3f1137151910d6a0dae,PodSandboxId:7ae1c8c92da2d1d7912d176fcd207453e8918abd4bb896bf97603df1bd7d86b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701984058368483459,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h4pmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3cc315-efaf-47b9-86e3-851cc930461b,},Annotations:map[string]string{io.kubernetes.container.hash: 70f362f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd79a03ef1e58abdb0f13478da45c2551657e49455d2b8e2adbbcb6becd6c59,PodSandboxId:b34bf4aa578ece3e829560ca325f38a2417209b19c7370b3a2affbed66762bfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701984057169611507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nllk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89c53a27-fa3e-40e9-b180-1bb6ae5c7b62,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe6f40c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e322f9a929a334072d4474e587a9eaa44ac85866bd4d222de6223371d43f99,PodSandboxId:31060c233cb8658454b6f8f9d659e14b51a4994447ea00ac2c70a860f616993f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701984036059091180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: bf95ab796fecf05f0e74a5a77549e004,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a89e2a11dada38a99b94b0e571ef5ff2cd3e0d8dba7a7bc08f2a267048bf099b,PodSandboxId:487e5cee31477f6068dc67ce06c2b3c639e440650261c6df8a1a0131f0ee39be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701984035831987046,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557f25590e782dbdd3c0d081d2d91cf1,},Annotations:
map[string]string{io.kubernetes.container.hash: 85891a44,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee851783f696a899540cc4d7612b26aa3902587cd2c8bf254e4737de2ed45458,PodSandboxId:e8940552f6ea0fb01ac6b3d337bfe6519629ec0c6ab3f47e93cdc549f015c10f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701984035377278788,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a487adf7114a53a4bb89
ae3f412bd87,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac812ba232eb7a81be1ff8566eb7f1058ed1c55c8dd708182faa198d3f19f057,PodSandboxId:800c608688cf7b32339ff05dff030d73e3028a125eadd1d37914a6216a6c16c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701984035191385748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-598346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74569b3ec3f3376a1fb2afd7e14df1
1,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b8b0e1d8-7172-41be-a21b-74fb62baafc1 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55d67718482d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   71157e5ee49d2       storage-provisioner
	79a33dbfd0cb2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   16 minutes ago      Running             kube-proxy                0                   7ae1c8c92da2d       kube-proxy-h4pmv
	0bd79a03ef1e5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   b34bf4aa578ec       coredns-5dd5756b68-nllk7
	b9e322f9a929a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   31060c233cb86       kube-scheduler-embed-certs-598346
	a89e2a11dada3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   487e5cee31477       etcd-embed-certs-598346
	ee851783f696a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   e8940552f6ea0       kube-controller-manager-embed-certs-598346
	ac812ba232eb7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   800c608688cf7       kube-apiserver-embed-certs-598346
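The container status table above is the same view `crictl` gives on the node; a sketch for reproducing it, assuming the embed-certs-598346 profile is still running:

    # list all CRI-O containers (running and exited) on the minikube node
    minikube -p embed-certs-598346 ssh -- sudo crictl ps -a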
	
	* 
	* ==> coredns [0bd79a03ef1e58abdb0f13478da45c2551657e49455d2b8e2adbbcb6becd6c59] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:47206 - 45525 "HINFO IN 4812590669896354982.6400754222289715007. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014442929s
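To pull the same CoreDNS log and sanity-check in-cluster DNS, a rough sketch (the kubectl context name is assumed to match the profile, and the k8s-app=kube-dns label and busybox image are upstream kubeadm defaults, not values taken from this run):

    # tail the CoreDNS pods' logs
    kubectl --context embed-certs-598346 -n kube-system logs -l k8s-app=kube-dns --tail=20
    # one-shot DNS lookup from inside the cluster
    kubectl --context embed-certs-598346 run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default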
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-598346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-598346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=embed-certs-598346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T21_20_43_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 21:20:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-598346
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 21:37:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 21:36:21 +0000   Thu, 07 Dec 2023 21:20:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 21:36:21 +0000   Thu, 07 Dec 2023 21:20:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 21:36:21 +0000   Thu, 07 Dec 2023 21:20:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 21:36:21 +0000   Thu, 07 Dec 2023 21:20:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.180
	  Hostname:    embed-certs-598346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c4331d3ecf844d2a32645f7c532352b
	  System UUID:                1c4331d3-ecf8-44d2-a326-45f7c532352b
	  Boot ID:                    06bf7769-9b17-4760-b917-c8bbfc301f7f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-nllk7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-598346                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-598346             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-598346    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-h4pmv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-598346             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-pstg2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-598346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-598346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-598346 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-598346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-598346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-598346 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m                kubelet          Node embed-certs-598346 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m                kubelet          Node embed-certs-598346 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-598346 event: Registered Node embed-certs-598346 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 7 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066662] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.343275] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.391305] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150890] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.624765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.967299] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.112429] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.155791] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.107350] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.214278] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +17.300988] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[Dec 7 21:16] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 7 21:20] systemd-fstab-generator[3536]: Ignoring "noauto" for root device
	[  +9.308056] systemd-fstab-generator[3863]: Ignoring "noauto" for root device
	[ +13.357384] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [a89e2a11dada38a99b94b0e571ef5ff2cd3e0d8dba7a7bc08f2a267048bf099b] <==
	* {"level":"info","ts":"2023-12-07T21:20:37.550507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-07T21:20:37.550621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgPreVoteResp from a1d4aad7c74b318 at term 1"}
	{"level":"info","ts":"2023-12-07T21:20:37.550656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became candidate at term 2"}
	{"level":"info","ts":"2023-12-07T21:20:37.550734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgVoteResp from a1d4aad7c74b318 at term 2"}
	{"level":"info","ts":"2023-12-07T21:20:37.55082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became leader at term 2"}
	{"level":"info","ts":"2023-12-07T21:20:37.550849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a1d4aad7c74b318 elected leader a1d4aad7c74b318 at term 2"}
	{"level":"info","ts":"2023-12-07T21:20:37.553807Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:20:37.554557Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:20:37.555683Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.180:2379"}
	{"level":"info","ts":"2023-12-07T21:20:37.556051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:20:37.556858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T21:20:37.559268Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:20:37.560118Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:20:37.560208Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:20:37.559502Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T21:20:37.560397Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T21:20:37.554497Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a1d4aad7c74b318","local-member-attributes":"{Name:embed-certs-598346 ClientURLs:[https://192.168.72.180:2379]}","request-path":"/0/members/a1d4aad7c74b318/attributes","cluster-id":"1bb44bc72743d07d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T21:30:38.134202Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":720}
	{"level":"info","ts":"2023-12-07T21:30:38.136954Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":720,"took":"2.343218ms","hash":223911950}
	{"level":"info","ts":"2023-12-07T21:30:38.137014Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":223911950,"revision":720,"compact-revision":-1}
	{"level":"info","ts":"2023-12-07T21:35:38.141621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":963}
	{"level":"info","ts":"2023-12-07T21:35:38.143962Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":963,"took":"1.647625ms","hash":1756245512}
	{"level":"info","ts":"2023-12-07T21:35:38.14406Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1756245512,"revision":963,"compact-revision":720}
	{"level":"info","ts":"2023-12-07T21:35:49.462557Z","caller":"traceutil/trace.go:171","msg":"trace[2010258614] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"125.373409ms","start":"2023-12-07T21:35:49.337134Z","end":"2023-12-07T21:35:49.462508Z","steps":["trace[2010258614] 'process raft request'  (duration: 125.262909ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:36:52.437379Z","caller":"traceutil/trace.go:171","msg":"trace[1382922765] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"112.285309ms","start":"2023-12-07T21:36:52.325035Z","end":"2023-12-07T21:36:52.43732Z","steps":["trace[1382922765] 'process raft request'  (duration: 112.119395ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  21:37:05 up 21 min,  0 users,  load average: 1.20, 0.45, 0.26
	Linux embed-certs-598346 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ac812ba232eb7a81be1ff8566eb7f1058ed1c55c8dd708182faa198d3f19f057] <==
	* W1207 21:33:40.912217       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:33:40.912283       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:33:40.912318       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:34:39.733403       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1207 21:35:39.732687       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:35:39.914601       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:35:39.914707       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:35:39.915096       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:35:40.914940       1 handler_proxy.go:93] no RequestInfo found in the context
	W1207 21:35:40.914979       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:35:40.915176       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:35:40.915184       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1207 21:35:40.915215       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:35:40.917269       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:36:39.732400       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:36:40.915824       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:36:40.915958       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:36:40.915969       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:36:40.918403       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:36:40.918474       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:36:40.918482       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [ee851783f696a899540cc4d7612b26aa3902587cd2c8bf254e4737de2ed45458] <==
	* I1207 21:31:25.471234       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:31:54.967500       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:31:55.480054       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1207 21:32:07.443485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="128.204µs"
	I1207 21:32:22.437357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="88.715µs"
	E1207 21:32:24.973517       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:32:25.489188       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:32:54.981144       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:32:55.498848       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:33:24.987980       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:33:25.508484       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:33:54.994275       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:33:55.518861       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:34:25.001528       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:34:25.528678       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:34:55.011679       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:34:55.538438       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:35:25.018240       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:35:25.549112       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:35:55.024843       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:35:55.561299       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:36:25.031323       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:36:25.569791       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:36:55.039637       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:36:55.583745       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [79a33dbfd0cb2ec9d98b7c040441bd146c8c8fe27914e3f1137151910d6a0dae] <==
	* I1207 21:20:59.149621       1 server_others.go:69] "Using iptables proxy"
	I1207 21:20:59.266730       1 node.go:141] Successfully retrieved node IP: 192.168.72.180
	I1207 21:20:59.593349       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 21:20:59.593408       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 21:20:59.664052       1 server_others.go:152] "Using iptables Proxier"
	I1207 21:20:59.666364       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 21:20:59.666564       1 server.go:846] "Version info" version="v1.28.4"
	I1207 21:20:59.666782       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 21:20:59.671863       1 config.go:188] "Starting service config controller"
	I1207 21:20:59.672370       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 21:20:59.672789       1 config.go:97] "Starting endpoint slice config controller"
	I1207 21:20:59.672954       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 21:20:59.679748       1 config.go:315] "Starting node config controller"
	I1207 21:20:59.679793       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 21:20:59.773691       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 21:20:59.773794       1 shared_informer.go:318] Caches are synced for service config
	I1207 21:20:59.780428       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b9e322f9a929a334072d4474e587a9eaa44ac85866bd4d222de6223371d43f99] <==
	* W1207 21:20:39.944461       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:20:39.944659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 21:20:39.944677       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:20:39.944685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 21:20:39.946168       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 21:20:39.946220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 21:20:40.796577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:20:40.796675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 21:20:40.850503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 21:20:40.850610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 21:20:40.930281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 21:20:40.930337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1207 21:20:40.934463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 21:20:40.934551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1207 21:20:40.956484       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:20:40.956586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 21:20:41.046059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 21:20:41.046147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1207 21:20:41.143445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 21:20:41.143640       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1207 21:20:41.184003       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 21:20:41.184056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1207 21:20:41.274993       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 21:20:41.275073       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1207 21:20:43.774407       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 21:15:32 UTC, ends at Thu 2023-12-07 21:37:05 UTC. --
	Dec 07 21:34:43 embed-certs-598346 kubelet[3870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:34:43 embed-certs-598346 kubelet[3870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:34:49 embed-certs-598346 kubelet[3870]: E1207 21:34:49.423267    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:35:03 embed-certs-598346 kubelet[3870]: E1207 21:35:03.423397    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:35:15 embed-certs-598346 kubelet[3870]: E1207 21:35:15.423513    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:35:26 embed-certs-598346 kubelet[3870]: E1207 21:35:26.422484    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:35:40 embed-certs-598346 kubelet[3870]: E1207 21:35:40.423539    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:35:43 embed-certs-598346 kubelet[3870]: E1207 21:35:43.470375    3870 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Dec 07 21:35:43 embed-certs-598346 kubelet[3870]: E1207 21:35:43.524106    3870 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:35:43 embed-certs-598346 kubelet[3870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:35:43 embed-certs-598346 kubelet[3870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:35:43 embed-certs-598346 kubelet[3870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:35:55 embed-certs-598346 kubelet[3870]: E1207 21:35:55.423953    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:36:10 embed-certs-598346 kubelet[3870]: E1207 21:36:10.422770    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:36:23 embed-certs-598346 kubelet[3870]: E1207 21:36:23.422514    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:36:35 embed-certs-598346 kubelet[3870]: E1207 21:36:35.422742    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:36:43 embed-certs-598346 kubelet[3870]: E1207 21:36:43.522761    3870 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:36:43 embed-certs-598346 kubelet[3870]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:36:43 embed-certs-598346 kubelet[3870]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:36:43 embed-certs-598346 kubelet[3870]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:36:48 embed-certs-598346 kubelet[3870]: E1207 21:36:48.423588    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	Dec 07 21:37:00 embed-certs-598346 kubelet[3870]: E1207 21:37:00.434481    3870 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 07 21:37:00 embed-certs-598346 kubelet[3870]: E1207 21:37:00.434538    3870 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 07 21:37:00 embed-certs-598346 kubelet[3870]: E1207 21:37:00.434804    3870 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-trhjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-pstg2_kube-system(463b12c8-de62-4ff8-a5c4-55eeb721eea8): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 07 21:37:00 embed-certs-598346 kubelet[3870]: E1207 21:37:00.434854    3870 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-pstg2" podUID="463b12c8-de62-4ff8-a5c4-55eeb721eea8"
	
	* 
	* ==> storage-provisioner [55d67718482d4572c85c9612435da05cbca02696fb9f0abe9867d2a9bb2ab0f7] <==
	* I1207 21:21:00.077379       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 21:21:00.092380       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 21:21:00.092666       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 21:21:00.104745       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 21:21:00.106160       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f730826-a22c-4e32-bb83-4169ecd2820a", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-598346_c58c72fa-4f91-4dd3-8a04-87db3ee51497 became leader
	I1207 21:21:00.106222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-598346_c58c72fa-4f91-4dd3-8a04-87db3ee51497!
	I1207 21:21:00.207069       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-598346_c58c72fa-4f91-4dd3-8a04-87db3ee51497!
	

                                                
                                                
-- /stdout --
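The kubelet entries in the log above show why the metrics-server pod never becomes Ready: every pull of fake.domain/registry.k8s.io/echoserver:1.4 fails with a DNS lookup error, leaving the container in ImagePullBackOff. As a minimal manual sketch (assuming the embed-certs-598346 cluster and pod were still reachable, and assuming the addon is managed by a Deployment named metrics-server, as the ReplicaSet name in the kube-controller-manager log suggests), the same condition could be confirmed with:

  # Show the image the metrics-server Deployment is configured to pull
  kubectl --context embed-certs-598346 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'

  # Inspect the pod's events for the ImagePullBackOff / DNS failure seen in the kubelet log
  kubectl --context embed-certs-598346 -n kube-system describe pod metrics-server-57f55c9bc5-pstg2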
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-598346 -n embed-certs-598346
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-598346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-pstg2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-598346 describe pod metrics-server-57f55c9bc5-pstg2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-598346 describe pod metrics-server-57f55c9bc5-pstg2: exit status 1 (65.626207ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-pstg2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-598346 describe pod metrics-server-57f55c9bc5-pstg2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (421.67s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (484.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-07 21:38:29.90976269 +0000 UTC m=+5834.077907654
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-275828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-275828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.873µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-275828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
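The failed check above amounts to two queries against the kubernetes-dashboard namespace: the test first waits for pods labelled k8s-app=kubernetes-dashboard, then expects the dashboard deployment info to reference registry.k8s.io/echoserver:1.4. A minimal manual sketch of the same inspection (assuming the default-k8s-diff-port-275828 cluster were still reachable; the context, namespace, and label are taken from the test output above):

  # Pods the test waits up to 9m0s for
  kubectl --context default-k8s-diff-port-275828 -n kubernetes-dashboard \
    get pods -l k8s-app=kubernetes-dashboard

  # Images used by the dashboard deployments; the check expects one to contain
  # "registry.k8s.io/echoserver:1.4"
  kubectl --context default-k8s-diff-port-275828 -n kubernetes-dashboard \
    get deploy -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'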
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-275828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-275828 logs -n 25: (2.793504679s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-715748 sudo systemctl                        | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | cat kubelet --no-pager                               |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo journalctl                       | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | -xeu kubelet --all --full                            |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo cat                              | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo cat                              | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo systemctl                        | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC |                     |
	|         | status docker --all --full                           |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo systemctl                        | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo cat                              | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo docker                           | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo systemctl                        | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo systemctl                        | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo cat                              | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo cat                              | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo                                  | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo systemctl                        | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo systemctl                        | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo cat                              | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo cat                              | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo containerd                       | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo systemctl                        | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo systemctl                        | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo find                             | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-715748 sudo crio                             | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p auto-715748                                       | auto-715748    | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC | 07 Dec 23 21:37 UTC |
	| start   | -p calico-715748 --memory=3072                       | calico-715748  | jenkins | v1.32.0 | 07 Dec 23 21:37 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	| ssh     | -p kindnet-715748 pgrep -a                           | kindnet-715748 | jenkins | v1.32.0 | 07 Dec 23 21:38 UTC | 07 Dec 23 21:38 UTC |
	|         | kubelet                                              |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 21:37:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 21:37:54.522286   59120 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:37:54.522539   59120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:37:54.522549   59120 out.go:309] Setting ErrFile to fd 2...
	I1207 21:37:54.522553   59120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:37:54.522723   59120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:37:54.523266   59120 out.go:303] Setting JSON to false
	I1207 21:37:54.524250   59120 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8421,"bootTime":1701976654,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:37:54.524309   59120 start.go:138] virtualization: kvm guest
	I1207 21:37:54.526545   59120 out.go:177] * [calico-715748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:37:54.528368   59120 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:37:54.529566   59120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:37:54.528396   59120 notify.go:220] Checking for updates...
	I1207 21:37:54.532080   59120 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:37:54.533523   59120 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:37:54.534843   59120 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:37:54.536143   59120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:37:54.537894   59120 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:37:54.538021   59120 config.go:182] Loaded profile config "kindnet-715748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:37:54.538153   59120 config.go:182] Loaded profile config "newest-cni-155321": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:37:54.538243   59120 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:37:54.574757   59120 out.go:177] * Using the kvm2 driver based on user configuration
	I1207 21:37:54.576130   59120 start.go:298] selected driver: kvm2
	I1207 21:37:54.576154   59120 start.go:902] validating driver "kvm2" against <nil>
	I1207 21:37:54.576165   59120 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:37:54.576834   59120 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:37:54.576916   59120 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:37:54.591035   59120 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:37:54.591081   59120 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 21:37:54.591288   59120 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 21:37:54.591351   59120 cni.go:84] Creating CNI manager for "calico"
	I1207 21:37:54.591370   59120 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I1207 21:37:54.591383   59120 start_flags.go:323] config:
	{Name:calico-715748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-715748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:37:54.591520   59120 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:37:54.593482   59120 out.go:177] * Starting control plane node calico-715748 in cluster calico-715748
	I1207 21:37:54.594778   59120 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:37:54.594817   59120 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 21:37:54.594834   59120 cache.go:56] Caching tarball of preloaded images
	I1207 21:37:54.594920   59120 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:37:54.594932   59120 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 21:37:54.595045   59120 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/config.json ...
	I1207 21:37:54.595073   59120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/config.json: {Name:mk29cb3581070bd8877853b01a0010aba8ff15a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:37:54.595225   59120 start.go:365] acquiring machines lock for calico-715748: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:37:54.595264   59120 start.go:369] acquired machines lock for "calico-715748" in 23.584µs
	I1207 21:37:54.595285   59120 start.go:93] Provisioning new machine with config: &{Name:calico-715748 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-715748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:37:54.595381   59120 start.go:125] createHost starting for "" (driver="kvm2")
	I1207 21:37:56.176777   57453 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005245 seconds
	I1207 21:37:56.176937   57453 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:37:56.200461   57453 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:37:56.737416   57453 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:37:56.737667   57453 kubeadm.go:322] [mark-control-plane] Marking the node kindnet-715748 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:37:57.253703   57453 kubeadm.go:322] [bootstrap-token] Using token: wgtfyx.sart8o41ek77n0ly
	I1207 21:37:57.255194   57453 out.go:204]   - Configuring RBAC rules ...
	I1207 21:37:57.255329   57453 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:37:57.261846   57453 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:37:57.276004   57453 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:37:57.280098   57453 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:37:57.284836   57453 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:37:57.291890   57453 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:37:57.314085   57453 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:37:57.538270   57453 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:37:57.681772   57453 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:37:57.681798   57453 kubeadm.go:322] 
	I1207 21:37:57.681875   57453 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:37:57.681886   57453 kubeadm.go:322] 
	I1207 21:37:57.682018   57453 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:37:57.682042   57453 kubeadm.go:322] 
	I1207 21:37:57.682068   57453 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:37:57.682142   57453 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:37:57.682200   57453 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:37:57.682211   57453 kubeadm.go:322] 
	I1207 21:37:57.682278   57453 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:37:57.682289   57453 kubeadm.go:322] 
	I1207 21:37:57.682363   57453 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:37:57.682375   57453 kubeadm.go:322] 
	I1207 21:37:57.682439   57453 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:37:57.682535   57453 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:37:57.682627   57453 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:37:57.682639   57453 kubeadm.go:322] 
	I1207 21:37:57.682743   57453 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:37:57.682842   57453 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:37:57.682852   57453 kubeadm.go:322] 
	I1207 21:37:57.682953   57453 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wgtfyx.sart8o41ek77n0ly \
	I1207 21:37:57.683083   57453 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:37:57.683111   57453 kubeadm.go:322] 	--control-plane 
	I1207 21:37:57.683120   57453 kubeadm.go:322] 
	I1207 21:37:57.683219   57453 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:37:57.683230   57453 kubeadm.go:322] 
	I1207 21:37:57.683332   57453 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wgtfyx.sart8o41ek77n0ly \
	I1207 21:37:57.683462   57453 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:37:57.685608   57453 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:37:57.685642   57453 cni.go:84] Creating CNI manager for "kindnet"
	I1207 21:37:57.687536   57453 out.go:177] * Configuring CNI (Container Networking Interface) ...
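
The kindnet-715748 run above has just finished kubeadm init and, a few lines further down, applies its CNI manifest by scp-ing it into the VM and running the bundled kubectl against /var/lib/minikube/kubeconfig. A minimal Go sketch of that "stage a manifest, then kubectl apply it against an explicit kubeconfig" pattern; the manifest contents, temp-file staging, and error handling here are illustrative assumptions, not minikube's actual code.

// Sketch only: apply a CNI manifest with kubectl against a fixed kubeconfig.
package main

import (
	"log"
	"os"
	"os/exec"
)

func applyCNI(manifest []byte) error {
	// Stage the manifest on local disk (minikube scp's it into the VM instead).
	tmp, err := os.CreateTemp("", "cni-*.yaml")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.Write(manifest); err != nil {
		return err
	}
	tmp.Close()

	// kubectl apply --kubeconfig=... -f <manifest>, mirroring the command in the log.
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/var/lib/minikube/kubeconfig", // path taken from the log
		"apply", "-f", tmp.Name())
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := applyCNI([]byte("# CNI DaemonSet manifest would go here")); err != nil {
		log.Fatal(err)
	}
}
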
	I1207 21:37:54.597123   59120 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 21:37:54.597252   59120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:37:54.597297   59120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:37:54.611524   59120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I1207 21:37:54.611925   59120 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:37:54.612410   59120 main.go:141] libmachine: Using API Version  1
	I1207 21:37:54.612425   59120 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:37:54.612799   59120 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:37:54.613018   59120 main.go:141] libmachine: (calico-715748) Calling .GetMachineName
	I1207 21:37:54.613178   59120 main.go:141] libmachine: (calico-715748) Calling .DriverName
	I1207 21:37:54.613358   59120 start.go:159] libmachine.API.Create for "calico-715748" (driver="kvm2")
	I1207 21:37:54.613390   59120 client.go:168] LocalClient.Create starting
	I1207 21:37:54.613423   59120 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 21:37:54.613480   59120 main.go:141] libmachine: Decoding PEM data...
	I1207 21:37:54.613506   59120 main.go:141] libmachine: Parsing certificate...
	I1207 21:37:54.613577   59120 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 21:37:54.613606   59120 main.go:141] libmachine: Decoding PEM data...
	I1207 21:37:54.613627   59120 main.go:141] libmachine: Parsing certificate...
	I1207 21:37:54.613651   59120 main.go:141] libmachine: Running pre-create checks...
	I1207 21:37:54.613665   59120 main.go:141] libmachine: (calico-715748) Calling .PreCreateCheck
	I1207 21:37:54.614124   59120 main.go:141] libmachine: (calico-715748) Calling .GetConfigRaw
	I1207 21:37:54.614638   59120 main.go:141] libmachine: Creating machine...
	I1207 21:37:54.614658   59120 main.go:141] libmachine: (calico-715748) Calling .Create
	I1207 21:37:54.614799   59120 main.go:141] libmachine: (calico-715748) Creating KVM machine...
	I1207 21:37:54.615887   59120 main.go:141] libmachine: (calico-715748) DBG | found existing default KVM network
	I1207 21:37:54.617129   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:54.616917   59143 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:d9:97} reservation:<nil>}
	I1207 21:37:54.618177   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:54.618092   59143 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002aa6c0}
	I1207 21:37:54.623486   59120 main.go:141] libmachine: (calico-715748) DBG | trying to create private KVM network mk-calico-715748 192.168.50.0/24...
	I1207 21:37:54.700119   59120 main.go:141] libmachine: (calico-715748) DBG | private KVM network mk-calico-715748 192.168.50.0/24 created
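
The two network.go lines above show the driver skipping the already-taken 192.168.39.0/24 and settling on 192.168.50.0/24. A toy Go sketch of that selection, assuming (purely from the 39 -> 50 jump in the log) a step of 11 between candidate /24s; the helper name and the "taken" set are made up for illustration.

// Sketch: pick the first candidate private /24 not already claimed by an existing network.
package main

import "fmt"

func pickFreeSubnet(taken map[string]bool) (string, error) {
	for third := 39; third <= 254; third += 11 { // step inferred from the log, an assumption
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free private /24 found")
}

func main() {
	taken := map[string]bool{"192.168.39.0/24": true} // virbr1 in the log above
	subnet, err := pickFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // prints 192.168.50.0/24
}
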
	I1207 21:37:54.700351   59120 main.go:141] libmachine: (calico-715748) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748 ...
	I1207 21:37:54.700398   59120 main.go:141] libmachine: (calico-715748) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 21:37:54.700424   59120 main.go:141] libmachine: (calico-715748) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 21:37:54.700454   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:54.700316   59143 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:37:54.914681   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:54.914540   59143 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/id_rsa...
	I1207 21:37:55.302921   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:55.302760   59143 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/calico-715748.rawdisk...
	I1207 21:37:55.302977   59120 main.go:141] libmachine: (calico-715748) DBG | Writing magic tar header
	I1207 21:37:55.303001   59120 main.go:141] libmachine: (calico-715748) DBG | Writing SSH key tar header
	I1207 21:37:55.303018   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:55.302915   59143 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748 ...
	I1207 21:37:55.303039   59120 main.go:141] libmachine: (calico-715748) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748
	I1207 21:37:55.303112   59120 main.go:141] libmachine: (calico-715748) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748 (perms=drwx------)
	I1207 21:37:55.303136   59120 main.go:141] libmachine: (calico-715748) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 21:37:55.303151   59120 main.go:141] libmachine: (calico-715748) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 21:37:55.303164   59120 main.go:141] libmachine: (calico-715748) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:37:55.303180   59120 main.go:141] libmachine: (calico-715748) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 21:37:55.303195   59120 main.go:141] libmachine: (calico-715748) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 21:37:55.303210   59120 main.go:141] libmachine: (calico-715748) DBG | Checking permissions on dir: /home/jenkins
	I1207 21:37:55.303220   59120 main.go:141] libmachine: (calico-715748) DBG | Checking permissions on dir: /home
	I1207 21:37:55.303236   59120 main.go:141] libmachine: (calico-715748) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 21:37:55.303253   59120 main.go:141] libmachine: (calico-715748) DBG | Skipping /home - not owner
	I1207 21:37:55.303286   59120 main.go:141] libmachine: (calico-715748) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 21:37:55.303302   59120 main.go:141] libmachine: (calico-715748) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 21:37:55.303310   59120 main.go:141] libmachine: (calico-715748) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 21:37:55.303323   59120 main.go:141] libmachine: (calico-715748) Creating domain...
	I1207 21:37:55.304299   59120 main.go:141] libmachine: (calico-715748) define libvirt domain using xml: 
	I1207 21:37:55.304326   59120 main.go:141] libmachine: (calico-715748) <domain type='kvm'>
	I1207 21:37:55.304338   59120 main.go:141] libmachine: (calico-715748)   <name>calico-715748</name>
	I1207 21:37:55.304347   59120 main.go:141] libmachine: (calico-715748)   <memory unit='MiB'>3072</memory>
	I1207 21:37:55.304357   59120 main.go:141] libmachine: (calico-715748)   <vcpu>2</vcpu>
	I1207 21:37:55.304369   59120 main.go:141] libmachine: (calico-715748)   <features>
	I1207 21:37:55.304382   59120 main.go:141] libmachine: (calico-715748)     <acpi/>
	I1207 21:37:55.304394   59120 main.go:141] libmachine: (calico-715748)     <apic/>
	I1207 21:37:55.304408   59120 main.go:141] libmachine: (calico-715748)     <pae/>
	I1207 21:37:55.304418   59120 main.go:141] libmachine: (calico-715748)     
	I1207 21:37:55.304427   59120 main.go:141] libmachine: (calico-715748)   </features>
	I1207 21:37:55.304460   59120 main.go:141] libmachine: (calico-715748)   <cpu mode='host-passthrough'>
	I1207 21:37:55.304484   59120 main.go:141] libmachine: (calico-715748)   
	I1207 21:37:55.304513   59120 main.go:141] libmachine: (calico-715748)   </cpu>
	I1207 21:37:55.304526   59120 main.go:141] libmachine: (calico-715748)   <os>
	I1207 21:37:55.304536   59120 main.go:141] libmachine: (calico-715748)     <type>hvm</type>
	I1207 21:37:55.304544   59120 main.go:141] libmachine: (calico-715748)     <boot dev='cdrom'/>
	I1207 21:37:55.304555   59120 main.go:141] libmachine: (calico-715748)     <boot dev='hd'/>
	I1207 21:37:55.304572   59120 main.go:141] libmachine: (calico-715748)     <bootmenu enable='no'/>
	I1207 21:37:55.304584   59120 main.go:141] libmachine: (calico-715748)   </os>
	I1207 21:37:55.304593   59120 main.go:141] libmachine: (calico-715748)   <devices>
	I1207 21:37:55.304607   59120 main.go:141] libmachine: (calico-715748)     <disk type='file' device='cdrom'>
	I1207 21:37:55.304623   59120 main.go:141] libmachine: (calico-715748)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/boot2docker.iso'/>
	I1207 21:37:55.304637   59120 main.go:141] libmachine: (calico-715748)       <target dev='hdc' bus='scsi'/>
	I1207 21:37:55.304648   59120 main.go:141] libmachine: (calico-715748)       <readonly/>
	I1207 21:37:55.304658   59120 main.go:141] libmachine: (calico-715748)     </disk>
	I1207 21:37:55.304673   59120 main.go:141] libmachine: (calico-715748)     <disk type='file' device='disk'>
	I1207 21:37:55.304688   59120 main.go:141] libmachine: (calico-715748)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 21:37:55.304706   59120 main.go:141] libmachine: (calico-715748)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/calico-715748.rawdisk'/>
	I1207 21:37:55.304719   59120 main.go:141] libmachine: (calico-715748)       <target dev='hda' bus='virtio'/>
	I1207 21:37:55.304731   59120 main.go:141] libmachine: (calico-715748)     </disk>
	I1207 21:37:55.304748   59120 main.go:141] libmachine: (calico-715748)     <interface type='network'>
	I1207 21:37:55.304763   59120 main.go:141] libmachine: (calico-715748)       <source network='mk-calico-715748'/>
	I1207 21:37:55.304772   59120 main.go:141] libmachine: (calico-715748)       <model type='virtio'/>
	I1207 21:37:55.304785   59120 main.go:141] libmachine: (calico-715748)     </interface>
	I1207 21:37:55.304802   59120 main.go:141] libmachine: (calico-715748)     <interface type='network'>
	I1207 21:37:55.304816   59120 main.go:141] libmachine: (calico-715748)       <source network='default'/>
	I1207 21:37:55.304827   59120 main.go:141] libmachine: (calico-715748)       <model type='virtio'/>
	I1207 21:37:55.304837   59120 main.go:141] libmachine: (calico-715748)     </interface>
	I1207 21:37:55.304845   59120 main.go:141] libmachine: (calico-715748)     <serial type='pty'>
	I1207 21:37:55.304858   59120 main.go:141] libmachine: (calico-715748)       <target port='0'/>
	I1207 21:37:55.304871   59120 main.go:141] libmachine: (calico-715748)     </serial>
	I1207 21:37:55.304884   59120 main.go:141] libmachine: (calico-715748)     <console type='pty'>
	I1207 21:37:55.304894   59120 main.go:141] libmachine: (calico-715748)       <target type='serial' port='0'/>
	I1207 21:37:55.304907   59120 main.go:141] libmachine: (calico-715748)     </console>
	I1207 21:37:55.304919   59120 main.go:141] libmachine: (calico-715748)     <rng model='virtio'>
	I1207 21:37:55.304934   59120 main.go:141] libmachine: (calico-715748)       <backend model='random'>/dev/random</backend>
	I1207 21:37:55.304949   59120 main.go:141] libmachine: (calico-715748)     </rng>
	I1207 21:37:55.304963   59120 main.go:141] libmachine: (calico-715748)     
	I1207 21:37:55.304979   59120 main.go:141] libmachine: (calico-715748)     
	I1207 21:37:55.304992   59120 main.go:141] libmachine: (calico-715748)   </devices>
	I1207 21:37:55.305000   59120 main.go:141] libmachine: (calico-715748) </domain>
	I1207 21:37:55.305016   59120 main.go:141] libmachine: (calico-715748) 
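
Once the domain XML above is assembled, the kvm2 driver defines and boots the guest through libvirt. An illustrative Go sketch that does the equivalent with the virsh CLI rather than the driver's libvirt bindings; the XML file path is an assumed placeholder.

// Sketch only: register the domain from its XML, then start it, via virsh.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	xmlPath := "/tmp/calico-715748.xml" // assumed location of the generated XML

	// "virsh define" registers the domain persistently; "virsh start" boots it.
	for _, args := range [][]string{
		{"-c", "qemu:///system", "define", xmlPath},
		{"-c", "qemu:///system", "start", "calico-715748"},
	} {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("virsh %v: %v", args, err)
		}
	}
}
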
	I1207 21:37:55.308993   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:46:1d:96 in network default
	I1207 21:37:55.309697   59120 main.go:141] libmachine: (calico-715748) Ensuring networks are active...
	I1207 21:37:55.309723   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:37:55.310452   59120 main.go:141] libmachine: (calico-715748) Ensuring network default is active
	I1207 21:37:55.310773   59120 main.go:141] libmachine: (calico-715748) Ensuring network mk-calico-715748 is active
	I1207 21:37:55.311274   59120 main.go:141] libmachine: (calico-715748) Getting domain xml...
	I1207 21:37:55.312006   59120 main.go:141] libmachine: (calico-715748) Creating domain...
	I1207 21:37:56.640086   59120 main.go:141] libmachine: (calico-715748) Waiting to get IP...
	I1207 21:37:56.640885   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:37:56.641259   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:37:56.641312   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:56.641259   59143 retry.go:31] will retry after 207.463252ms: waiting for machine to come up
	I1207 21:37:56.850842   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:37:56.851324   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:37:56.851355   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:56.851284   59143 retry.go:31] will retry after 339.385331ms: waiting for machine to come up
	I1207 21:37:57.191760   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:37:57.192195   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:37:57.192223   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:57.192146   59143 retry.go:31] will retry after 427.779084ms: waiting for machine to come up
	I1207 21:37:57.621671   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:37:57.622166   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:37:57.622194   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:57.622122   59143 retry.go:31] will retry after 419.69932ms: waiting for machine to come up
	I1207 21:37:58.043769   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:37:58.044468   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:37:58.044495   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:58.044406   59143 retry.go:31] will retry after 710.5295ms: waiting for machine to come up
	I1207 21:37:58.756744   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:37:58.757395   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:37:58.757426   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:58.757336   59143 retry.go:31] will retry after 719.68673ms: waiting for machine to come up
	I1207 21:37:59.478257   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:37:59.478702   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:37:59.478726   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:37:59.478669   59143 retry.go:31] will retry after 999.430673ms: waiting for machine to come up
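
The retry.go lines above poll for the new domain's DHCP lease with a growing delay. A minimal Go sketch of that wait loop; lookupIP is a purely hypothetical stand-in for the real lease lookup, and the backoff values only approximate the intervals in the log.

// Sketch: retry with a growing, jittered delay until the machine reports an IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP pretends to query libvirt for the domain's DHCP lease.
func lookupIP(attempt int) (string, error) {
	if attempt < 8 { // simulate the machine taking a while to come up
		return "", errNoIP
	}
	return "192.168.50.4", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2 // back off, roughly like the intervals in the log
		}
	}
}
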
	I1207 21:37:57.689053   57453 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 21:37:57.707481   57453 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1207 21:37:57.707505   57453 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1207 21:37:57.731082   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 21:37:58.756384   57453 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.025265016s)
	I1207 21:37:58.756431   57453 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:37:58.756548   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:37:58.756567   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=kindnet-715748 minikube.k8s.io/updated_at=2023_12_07T21_37_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:37:58.788100   57453 ops.go:34] apiserver oom_adj: -16
	I1207 21:37:58.942866   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:37:59.047085   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:37:59.640254   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:00.139716   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:00.640623   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:01.140241   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:01.639709   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:02.140128   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:00.479265   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:00.479779   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:38:00.479807   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:38:00.479725   59143 retry.go:31] will retry after 1.000226302s: waiting for machine to come up
	I1207 21:38:01.481028   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:01.481437   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:38:01.481480   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:38:01.481403   59143 retry.go:31] will retry after 1.225050304s: waiting for machine to come up
	I1207 21:38:02.707912   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:02.708288   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:38:02.708319   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:38:02.708241   59143 retry.go:31] will retry after 1.965571682s: waiting for machine to come up
	I1207 21:38:02.639578   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:03.140642   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:03.640099   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:04.139607   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:04.640557   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:05.139712   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:05.640254   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:06.140409   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:06.639986   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:07.140499   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:04.675183   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:04.675734   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:38:04.675757   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:38:04.675683   59143 retry.go:31] will retry after 1.819817276s: waiting for machine to come up
	I1207 21:38:06.496950   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:06.497501   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:38:06.497531   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:38:06.497453   59143 retry.go:31] will retry after 2.72743827s: waiting for machine to come up
	I1207 21:38:09.227208   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:09.227784   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:38:09.227815   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:38:09.227752   59143 retry.go:31] will retry after 3.488422095s: waiting for machine to come up
	I1207 21:38:07.639920   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:08.140569   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:08.639858   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:09.140406   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:09.640234   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:10.140017   57453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:38:10.226894   57453 kubeadm.go:1088] duration metric: took 11.470443797s to wait for elevateKubeSystemPrivileges.
	I1207 21:38:10.226935   57453 kubeadm.go:406] StartCluster complete in 26.156876929s
	I1207 21:38:10.226956   57453 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:38:10.227040   57453 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:38:10.228346   57453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:38:10.228552   57453 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:38:10.228689   57453 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:38:10.228755   57453 addons.go:69] Setting storage-provisioner=true in profile "kindnet-715748"
	I1207 21:38:10.228781   57453 addons.go:69] Setting default-storageclass=true in profile "kindnet-715748"
	I1207 21:38:10.228815   57453 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-715748"
	I1207 21:38:10.228787   57453 config.go:182] Loaded profile config "kindnet-715748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:38:10.228791   57453 addons.go:231] Setting addon storage-provisioner=true in "kindnet-715748"
	I1207 21:38:10.228982   57453 host.go:66] Checking if "kindnet-715748" exists ...
	I1207 21:38:10.229401   57453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:38:10.229414   57453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:38:10.229436   57453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:38:10.229438   57453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:38:10.245100   57453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I1207 21:38:10.245527   57453 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:38:10.246041   57453 main.go:141] libmachine: Using API Version  1
	I1207 21:38:10.246069   57453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:38:10.246447   57453 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:38:10.247027   57453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:38:10.247056   57453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:38:10.247086   57453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I1207 21:38:10.247517   57453 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:38:10.248021   57453 main.go:141] libmachine: Using API Version  1
	I1207 21:38:10.248050   57453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:38:10.248431   57453 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:38:10.248609   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetState
	I1207 21:38:10.251930   57453 addons.go:231] Setting addon default-storageclass=true in "kindnet-715748"
	I1207 21:38:10.251981   57453 host.go:66] Checking if "kindnet-715748" exists ...
	I1207 21:38:10.252391   57453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:38:10.252423   57453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:38:10.261742   57453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
	I1207 21:38:10.262211   57453 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:38:10.262787   57453 main.go:141] libmachine: Using API Version  1
	I1207 21:38:10.262808   57453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:38:10.263161   57453 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:38:10.263355   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetState
	I1207 21:38:10.265541   57453 main.go:141] libmachine: (kindnet-715748) Calling .DriverName
	I1207 21:38:10.271091   57453 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:38:10.268328   57453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46457
	I1207 21:38:10.271607   57453 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:38:10.272850   57453 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:38:10.272943   57453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:38:10.272970   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetSSHHostname
	I1207 21:38:10.273360   57453 main.go:141] libmachine: Using API Version  1
	I1207 21:38:10.273384   57453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:38:10.273728   57453 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:38:10.274344   57453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:38:10.274391   57453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:38:10.276291   57453 main.go:141] libmachine: (kindnet-715748) DBG | domain kindnet-715748 has defined MAC address 52:54:00:31:2f:08 in network mk-kindnet-715748
	I1207 21:38:10.276818   57453 main.go:141] libmachine: (kindnet-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2f:08", ip: ""} in network mk-kindnet-715748: {Iface:virbr4 ExpiryTime:2023-12-07 22:37:24 +0000 UTC Type:0 Mac:52:54:00:31:2f:08 Iaid: IPaddr:192.168.72.212 Prefix:24 Hostname:kindnet-715748 Clientid:01:52:54:00:31:2f:08}
	I1207 21:38:10.276846   57453 main.go:141] libmachine: (kindnet-715748) DBG | domain kindnet-715748 has defined IP address 192.168.72.212 and MAC address 52:54:00:31:2f:08 in network mk-kindnet-715748
	I1207 21:38:10.277038   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetSSHPort
	I1207 21:38:10.277216   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetSSHKeyPath
	I1207 21:38:10.277400   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetSSHUsername
	I1207 21:38:10.277568   57453 sshutil.go:53] new ssh client: &{IP:192.168.72.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/kindnet-715748/id_rsa Username:docker}
	I1207 21:38:10.278428   57453 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-715748" context rescaled to 1 replicas
	I1207 21:38:10.278457   57453 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.72.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:38:10.280177   57453 out.go:177] * Verifying Kubernetes components...
	I1207 21:38:10.281722   57453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:38:10.289885   57453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36971
	I1207 21:38:10.290320   57453 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:38:10.290861   57453 main.go:141] libmachine: Using API Version  1
	I1207 21:38:10.290892   57453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:38:10.291231   57453 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:38:10.291441   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetState
	I1207 21:38:10.293162   57453 main.go:141] libmachine: (kindnet-715748) Calling .DriverName
	I1207 21:38:10.293452   57453 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:38:10.293474   57453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:38:10.293491   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetSSHHostname
	I1207 21:38:10.296666   57453 main.go:141] libmachine: (kindnet-715748) DBG | domain kindnet-715748 has defined MAC address 52:54:00:31:2f:08 in network mk-kindnet-715748
	I1207 21:38:10.297076   57453 main.go:141] libmachine: (kindnet-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:2f:08", ip: ""} in network mk-kindnet-715748: {Iface:virbr4 ExpiryTime:2023-12-07 22:37:24 +0000 UTC Type:0 Mac:52:54:00:31:2f:08 Iaid: IPaddr:192.168.72.212 Prefix:24 Hostname:kindnet-715748 Clientid:01:52:54:00:31:2f:08}
	I1207 21:38:10.297103   57453 main.go:141] libmachine: (kindnet-715748) DBG | domain kindnet-715748 has defined IP address 192.168.72.212 and MAC address 52:54:00:31:2f:08 in network mk-kindnet-715748
	I1207 21:38:10.297346   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetSSHPort
	I1207 21:38:10.297866   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetSSHKeyPath
	I1207 21:38:10.298042   57453 main.go:141] libmachine: (kindnet-715748) Calling .GetSSHUsername
	I1207 21:38:10.298208   57453 sshutil.go:53] new ssh client: &{IP:192.168.72.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/kindnet-715748/id_rsa Username:docker}
	I1207 21:38:10.403058   57453 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:38:10.404324   57453 node_ready.go:35] waiting up to 15m0s for node "kindnet-715748" to be "Ready" ...
	I1207 21:38:10.428958   57453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:38:10.496142   57453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:38:11.120737   57453 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1207 21:38:11.336642   57453 main.go:141] libmachine: Making call to close driver server
	I1207 21:38:11.336672   57453 main.go:141] libmachine: (kindnet-715748) Calling .Close
	I1207 21:38:11.336649   57453 main.go:141] libmachine: Making call to close driver server
	I1207 21:38:11.336759   57453 main.go:141] libmachine: (kindnet-715748) Calling .Close
	I1207 21:38:11.336959   57453 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:38:11.337056   57453 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:38:11.337068   57453 main.go:141] libmachine: (kindnet-715748) DBG | Closing plugin on server side
	I1207 21:38:11.337070   57453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:38:11.337072   57453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:38:11.337069   57453 main.go:141] libmachine: (kindnet-715748) DBG | Closing plugin on server side
	I1207 21:38:11.337086   57453 main.go:141] libmachine: Making call to close driver server
	I1207 21:38:11.337096   57453 main.go:141] libmachine: (kindnet-715748) Calling .Close
	I1207 21:38:11.337087   57453 main.go:141] libmachine: Making call to close driver server
	I1207 21:38:11.337125   57453 main.go:141] libmachine: (kindnet-715748) Calling .Close
	I1207 21:38:11.337303   57453 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:38:11.337516   57453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:38:11.337328   57453 main.go:141] libmachine: (kindnet-715748) DBG | Closing plugin on server side
	I1207 21:38:11.337470   57453 main.go:141] libmachine: (kindnet-715748) DBG | Closing plugin on server side
	I1207 21:38:11.337475   57453 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:38:11.337583   57453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:38:11.355472   57453 main.go:141] libmachine: Making call to close driver server
	I1207 21:38:11.355497   57453 main.go:141] libmachine: (kindnet-715748) Calling .Close
	I1207 21:38:11.355768   57453 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:38:11.355787   57453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:38:11.357450   57453 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1207 21:38:11.358773   57453 addons.go:502] enable addons completed in 1.130083763s: enabled=[storage-provisioner default-storageclass]
	I1207 21:38:12.419625   57453 node_ready.go:58] node "kindnet-715748" has status "Ready":"False"
	I1207 21:38:12.719302   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:12.719772   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find current IP address of domain calico-715748 in network mk-calico-715748
	I1207 21:38:12.719803   59120 main.go:141] libmachine: (calico-715748) DBG | I1207 21:38:12.719719   59143 retry.go:31] will retry after 4.870487444s: waiting for machine to come up
	I1207 21:38:14.421254   57453 node_ready.go:58] node "kindnet-715748" has status "Ready":"False"
	I1207 21:38:15.427292   57453 node_ready.go:49] node "kindnet-715748" has status "Ready":"True"
	I1207 21:38:15.427313   57453 node_ready.go:38] duration metric: took 5.022948584s waiting for node "kindnet-715748" to be "Ready" ...
	I1207 21:38:15.427322   57453 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:38:15.437506   57453 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-sbghn" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:16.974853   57453 pod_ready.go:92] pod "coredns-5dd5756b68-sbghn" in "kube-system" namespace has status "Ready":"True"
	I1207 21:38:16.974877   57453 pod_ready.go:81] duration metric: took 1.537346384s waiting for pod "coredns-5dd5756b68-sbghn" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:16.974886   57453 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-715748" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:17.591212   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:17.591749   59120 main.go:141] libmachine: (calico-715748) Found IP for machine: 192.168.50.4
	I1207 21:38:17.591766   59120 main.go:141] libmachine: (calico-715748) Reserving static IP address...
	I1207 21:38:17.591776   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has current primary IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:17.592076   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find host DHCP lease matching {name: "calico-715748", mac: "52:54:00:59:f9:ea", ip: "192.168.50.4"} in network mk-calico-715748
	I1207 21:38:17.666320   59120 main.go:141] libmachine: (calico-715748) DBG | Getting to WaitForSSH function...
	I1207 21:38:17.666354   59120 main.go:141] libmachine: (calico-715748) Reserved static IP address: 192.168.50.4
	I1207 21:38:17.666377   59120 main.go:141] libmachine: (calico-715748) Waiting for SSH to be available...
	I1207 21:38:17.668972   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:17.669280   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748
	I1207 21:38:17.669312   59120 main.go:141] libmachine: (calico-715748) DBG | unable to find defined IP address of network mk-calico-715748 interface with MAC address 52:54:00:59:f9:ea
	I1207 21:38:17.669441   59120 main.go:141] libmachine: (calico-715748) DBG | Using SSH client type: external
	I1207 21:38:17.669466   59120 main.go:141] libmachine: (calico-715748) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/id_rsa (-rw-------)
	I1207 21:38:17.669507   59120 main.go:141] libmachine: (calico-715748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:38:17.669548   59120 main.go:141] libmachine: (calico-715748) DBG | About to run SSH command:
	I1207 21:38:17.669566   59120 main.go:141] libmachine: (calico-715748) DBG | exit 0
	I1207 21:38:17.673355   59120 main.go:141] libmachine: (calico-715748) DBG | SSH cmd err, output: exit status 255: 
	I1207 21:38:17.673376   59120 main.go:141] libmachine: (calico-715748) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1207 21:38:17.673384   59120 main.go:141] libmachine: (calico-715748) DBG | command : exit 0
	I1207 21:38:17.673393   59120 main.go:141] libmachine: (calico-715748) DBG | err     : exit status 255
	I1207 21:38:17.673415   59120 main.go:141] libmachine: (calico-715748) DBG | output  : 
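
The WaitForSSH probe above runs "exit 0" over ssh until the guest's sshd answers; the exit status 255 just means sshd is not up yet. A hedged Go sketch of that probe using the system ssh client; the host, key path, and retry budget are placeholders, not the test's real values.

// Sketch: poll a guest for SSH availability by running a no-op remote command.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"docker@"+host, "exit 0")
	return cmd.Run() == nil // nil error <=> remote command exited 0
}

func main() {
	host, key := "192.168.50.4", "/path/to/id_rsa" // placeholders
	for i := 0; i < 30; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		fmt.Println("SSH not ready yet, retrying...")
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
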
	I1207 21:38:17.998064   57453 pod_ready.go:92] pod "etcd-kindnet-715748" in "kube-system" namespace has status "Ready":"True"
	I1207 21:38:17.998089   57453 pod_ready.go:81] duration metric: took 1.023197188s waiting for pod "etcd-kindnet-715748" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:17.998100   57453 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-715748" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:18.003449   57453 pod_ready.go:92] pod "kube-apiserver-kindnet-715748" in "kube-system" namespace has status "Ready":"True"
	I1207 21:38:18.003468   57453 pod_ready.go:81] duration metric: took 5.36147ms waiting for pod "kube-apiserver-kindnet-715748" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:18.003476   57453 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-715748" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:18.220013   57453 pod_ready.go:92] pod "kube-controller-manager-kindnet-715748" in "kube-system" namespace has status "Ready":"True"
	I1207 21:38:18.220035   57453 pod_ready.go:81] duration metric: took 216.553464ms waiting for pod "kube-controller-manager-kindnet-715748" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:18.220045   57453 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-vp8t6" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:18.620526   57453 pod_ready.go:92] pod "kube-proxy-vp8t6" in "kube-system" namespace has status "Ready":"True"
	I1207 21:38:18.620551   57453 pod_ready.go:81] duration metric: took 400.500048ms waiting for pod "kube-proxy-vp8t6" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:18.620563   57453 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-715748" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:19.020686   57453 pod_ready.go:92] pod "kube-scheduler-kindnet-715748" in "kube-system" namespace has status "Ready":"True"
	I1207 21:38:19.020706   57453 pod_ready.go:81] duration metric: took 400.13579ms waiting for pod "kube-scheduler-kindnet-715748" in "kube-system" namespace to be "Ready" ...
	I1207 21:38:19.020716   57453 pod_ready.go:38] duration metric: took 3.593386043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:38:19.020730   57453 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:38:19.020775   57453 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:38:19.037588   57453 api_server.go:72] duration metric: took 8.759105363s to wait for apiserver process to appear ...
	I1207 21:38:19.037618   57453 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:38:19.037634   57453 api_server.go:253] Checking apiserver healthz at https://192.168.72.212:8443/healthz ...
	I1207 21:38:19.042503   57453 api_server.go:279] https://192.168.72.212:8443/healthz returned 200:
	ok
	I1207 21:38:19.043746   57453 api_server.go:141] control plane version: v1.28.4
	I1207 21:38:19.043770   57453 api_server.go:131] duration metric: took 6.145009ms to wait for apiserver health ...
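
The healthz check above is a plain HTTPS GET against the apiserver that expects a 200 with body "ok". A minimal Go sketch of the same probe; skipping TLS verification is a shortcut for the sketch (minikube trusts the cluster CA instead), and the address is copied from the log.

// Sketch: probe the apiserver's /healthz endpoint once.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.212:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}
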
	I1207 21:38:19.043779   57453 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:38:19.223008   57453 system_pods.go:59] 8 kube-system pods found
	I1207 21:38:19.223041   57453 system_pods.go:61] "coredns-5dd5756b68-sbghn" [4cd63ac7-18f7-464e-8e12-9a4c77632d17] Running
	I1207 21:38:19.223046   57453 system_pods.go:61] "etcd-kindnet-715748" [e4a27bca-ef91-4907-99e5-3486ed7d1d29] Running
	I1207 21:38:19.223051   57453 system_pods.go:61] "kindnet-f5p4g" [7d1ce3bd-6313-43a2-b6ce-42f0b28f8639] Running
	I1207 21:38:19.223055   57453 system_pods.go:61] "kube-apiserver-kindnet-715748" [caf8954e-a302-40a6-a73e-dabd76991be7] Running
	I1207 21:38:19.223060   57453 system_pods.go:61] "kube-controller-manager-kindnet-715748" [ec429e95-ee34-46ff-b2b4-be58915f54c9] Running
	I1207 21:38:19.223064   57453 system_pods.go:61] "kube-proxy-vp8t6" [a7cd4a04-8002-4af2-8c7c-2e6b56ae51ce] Running
	I1207 21:38:19.223068   57453 system_pods.go:61] "kube-scheduler-kindnet-715748" [d5333f94-a8b3-47d9-b20c-5be37b7dbdde] Running
	I1207 21:38:19.223072   57453 system_pods.go:61] "storage-provisioner" [4b6632b1-a399-4c39-bc03-2441c82b215f] Running
	I1207 21:38:19.223079   57453 system_pods.go:74] duration metric: took 179.293563ms to wait for pod list to return data ...
	I1207 21:38:19.223089   57453 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:38:19.420011   57453 default_sa.go:45] found service account: "default"
	I1207 21:38:19.420038   57453 default_sa.go:55] duration metric: took 196.941719ms for default service account to be created ...
	I1207 21:38:19.420049   57453 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:38:19.623912   57453 system_pods.go:86] 8 kube-system pods found
	I1207 21:38:19.623953   57453 system_pods.go:89] "coredns-5dd5756b68-sbghn" [4cd63ac7-18f7-464e-8e12-9a4c77632d17] Running
	I1207 21:38:19.623964   57453 system_pods.go:89] "etcd-kindnet-715748" [e4a27bca-ef91-4907-99e5-3486ed7d1d29] Running
	I1207 21:38:19.623971   57453 system_pods.go:89] "kindnet-f5p4g" [7d1ce3bd-6313-43a2-b6ce-42f0b28f8639] Running
	I1207 21:38:19.623977   57453 system_pods.go:89] "kube-apiserver-kindnet-715748" [caf8954e-a302-40a6-a73e-dabd76991be7] Running
	I1207 21:38:19.623984   57453 system_pods.go:89] "kube-controller-manager-kindnet-715748" [ec429e95-ee34-46ff-b2b4-be58915f54c9] Running
	I1207 21:38:19.623990   57453 system_pods.go:89] "kube-proxy-vp8t6" [a7cd4a04-8002-4af2-8c7c-2e6b56ae51ce] Running
	I1207 21:38:19.623996   57453 system_pods.go:89] "kube-scheduler-kindnet-715748" [d5333f94-a8b3-47d9-b20c-5be37b7dbdde] Running
	I1207 21:38:19.624002   57453 system_pods.go:89] "storage-provisioner" [4b6632b1-a399-4c39-bc03-2441c82b215f] Running
	I1207 21:38:19.624012   57453 system_pods.go:126] duration metric: took 203.956085ms to wait for k8s-apps to be running ...
	I1207 21:38:19.624030   57453 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:38:19.624078   57453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:38:19.640954   57453 system_svc.go:56] duration metric: took 16.919877ms WaitForService to wait for kubelet.
	I1207 21:38:19.640978   57453 kubeadm.go:581] duration metric: took 9.362498189s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:38:19.640999   57453 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:38:19.820488   57453 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:38:19.820513   57453 node_conditions.go:123] node cpu capacity is 2
	I1207 21:38:19.820523   57453 node_conditions.go:105] duration metric: took 179.518709ms to run NodePressure ...
	I1207 21:38:19.820533   57453 start.go:228] waiting for startup goroutines ...
	I1207 21:38:19.820539   57453 start.go:233] waiting for cluster config update ...
	I1207 21:38:19.820547   57453 start.go:242] writing updated cluster config ...
	I1207 21:38:19.820833   57453 ssh_runner.go:195] Run: rm -f paused
	I1207 21:38:19.866861   57453 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:38:19.869018   57453 out.go:177] * Done! kubectl is now configured to use "kindnet-715748" cluster and "default" namespace by default
	I1207 21:38:20.674009   59120 main.go:141] libmachine: (calico-715748) DBG | Getting to WaitForSSH function...
	I1207 21:38:20.676674   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:20.677058   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:20.677087   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:20.677235   59120 main.go:141] libmachine: (calico-715748) DBG | Using SSH client type: external
	I1207 21:38:20.677259   59120 main.go:141] libmachine: (calico-715748) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/id_rsa (-rw-------)
	I1207 21:38:20.677288   59120 main.go:141] libmachine: (calico-715748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:38:20.677303   59120 main.go:141] libmachine: (calico-715748) DBG | About to run SSH command:
	I1207 21:38:20.677359   59120 main.go:141] libmachine: (calico-715748) DBG | exit 0
	I1207 21:38:20.765420   59120 main.go:141] libmachine: (calico-715748) DBG | SSH cmd err, output: <nil>: 
	I1207 21:38:20.765686   59120 main.go:141] libmachine: (calico-715748) KVM machine creation complete!
	I1207 21:38:20.766033   59120 main.go:141] libmachine: (calico-715748) Calling .GetConfigRaw
	I1207 21:38:20.766545   59120 main.go:141] libmachine: (calico-715748) Calling .DriverName
	I1207 21:38:20.766731   59120 main.go:141] libmachine: (calico-715748) Calling .DriverName
	I1207 21:38:20.766898   59120 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1207 21:38:20.766916   59120 main.go:141] libmachine: (calico-715748) Calling .GetState
	I1207 21:38:20.768031   59120 main.go:141] libmachine: Detecting operating system of created instance...
	I1207 21:38:20.768048   59120 main.go:141] libmachine: Waiting for SSH to be available...
	I1207 21:38:20.768055   59120 main.go:141] libmachine: Getting to WaitForSSH function...
	I1207 21:38:20.768061   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:20.770163   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:20.770630   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:20.770667   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:20.770831   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHPort
	I1207 21:38:20.771022   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:20.771197   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:20.771344   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHUsername
	I1207 21:38:20.771503   59120 main.go:141] libmachine: Using SSH client type: native
	I1207 21:38:20.771838   59120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1207 21:38:20.771853   59120 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1207 21:38:20.881208   59120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:38:20.881235   59120 main.go:141] libmachine: Detecting the provisioner...
	I1207 21:38:20.881275   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:20.883952   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:20.884374   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:20.884401   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:20.884550   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHPort
	I1207 21:38:20.884766   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:20.884955   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:20.885123   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHUsername
	I1207 21:38:20.885286   59120 main.go:141] libmachine: Using SSH client type: native
	I1207 21:38:20.885646   59120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1207 21:38:20.885670   59120 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1207 21:38:21.006578   59120 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2b7375-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1207 21:38:21.006639   59120 main.go:141] libmachine: found compatible host: buildroot
	I1207 21:38:21.006650   59120 main.go:141] libmachine: Provisioning with buildroot...
	I1207 21:38:21.006661   59120 main.go:141] libmachine: (calico-715748) Calling .GetMachineName
	I1207 21:38:21.006901   59120 buildroot.go:166] provisioning hostname "calico-715748"
	I1207 21:38:21.006918   59120 main.go:141] libmachine: (calico-715748) Calling .GetMachineName
	I1207 21:38:21.007066   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:21.009719   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.010137   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:21.010168   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.010364   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHPort
	I1207 21:38:21.010588   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:21.010760   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:21.010907   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHUsername
	I1207 21:38:21.011092   59120 main.go:141] libmachine: Using SSH client type: native
	I1207 21:38:21.011443   59120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1207 21:38:21.011460   59120 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-715748 && echo "calico-715748" | sudo tee /etc/hostname
	I1207 21:38:21.139711   59120 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-715748
	
	I1207 21:38:21.139738   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:21.142562   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.142936   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:21.142991   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.143128   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHPort
	I1207 21:38:21.143284   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:21.143383   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:21.143467   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHUsername
	I1207 21:38:21.143665   59120 main.go:141] libmachine: Using SSH client type: native
	I1207 21:38:21.143985   59120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1207 21:38:21.144003   59120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-715748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-715748/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-715748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:38:21.261943   59120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:38:21.261971   59120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:38:21.262004   59120 buildroot.go:174] setting up certificates
	I1207 21:38:21.262023   59120 provision.go:83] configureAuth start
	I1207 21:38:21.262044   59120 main.go:141] libmachine: (calico-715748) Calling .GetMachineName
	I1207 21:38:21.262340   59120 main.go:141] libmachine: (calico-715748) Calling .GetIP
	I1207 21:38:21.264541   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.264928   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:21.264954   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.265134   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:21.267181   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.267515   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:21.267543   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.267658   59120 provision.go:138] copyHostCerts
	I1207 21:38:21.267710   59120 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:38:21.267728   59120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:38:21.267789   59120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:38:21.267883   59120 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:38:21.267890   59120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:38:21.267918   59120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:38:21.267988   59120 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:38:21.267995   59120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:38:21.268017   59120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:38:21.268071   59120 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.calico-715748 san=[192.168.50.4 192.168.50.4 localhost 127.0.0.1 minikube calico-715748]
	I1207 21:38:21.562614   59120 provision.go:172] copyRemoteCerts
	I1207 21:38:21.562673   59120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:38:21.562695   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:21.565266   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.565614   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:21.565647   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.565807   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHPort
	I1207 21:38:21.566034   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:21.566199   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHUsername
	I1207 21:38:21.566343   59120 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/id_rsa Username:docker}
	I1207 21:38:21.651848   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:38:21.674507   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1207 21:38:21.696288   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:38:21.717682   59120 provision.go:86] duration metric: configureAuth took 455.646567ms
	I1207 21:38:21.717704   59120 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:38:21.717888   59120 config.go:182] Loaded profile config "calico-715748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:38:21.717983   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:21.720757   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.721158   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:21.721195   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:21.721308   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHPort
	I1207 21:38:21.721522   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:21.721667   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:21.721812   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHUsername
	I1207 21:38:21.721969   59120 main.go:141] libmachine: Using SSH client type: native
	I1207 21:38:21.722318   59120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1207 21:38:21.722348   59120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:38:22.030523   59120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:38:22.030553   59120 main.go:141] libmachine: Checking connection to Docker...
	I1207 21:38:22.030572   59120 main.go:141] libmachine: (calico-715748) Calling .GetURL
	I1207 21:38:22.031845   59120 main.go:141] libmachine: (calico-715748) DBG | Using libvirt version 6000000
	I1207 21:38:22.034332   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.034714   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:22.034763   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.034902   59120 main.go:141] libmachine: Docker is up and running!
	I1207 21:38:22.034921   59120 main.go:141] libmachine: Reticulating splines...
	I1207 21:38:22.034941   59120 client.go:171] LocalClient.Create took 27.421528671s
	I1207 21:38:22.034969   59120 start.go:167] duration metric: libmachine.API.Create for "calico-715748" took 27.421612071s
	I1207 21:38:22.034984   59120 start.go:300] post-start starting for "calico-715748" (driver="kvm2")
	I1207 21:38:22.034995   59120 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:38:22.035017   59120 main.go:141] libmachine: (calico-715748) Calling .DriverName
	I1207 21:38:22.035274   59120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:38:22.035305   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:22.037635   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.037969   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:22.037991   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.038142   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHPort
	I1207 21:38:22.038316   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:22.038485   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHUsername
	I1207 21:38:22.038627   59120 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/id_rsa Username:docker}
	I1207 21:38:22.123247   59120 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:38:22.127579   59120 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:38:22.127605   59120 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:38:22.127668   59120 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:38:22.127734   59120 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:38:22.127879   59120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:38:22.136142   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:38:22.158632   59120 start.go:303] post-start completed in 123.635528ms
	I1207 21:38:22.158683   59120 main.go:141] libmachine: (calico-715748) Calling .GetConfigRaw
	I1207 21:38:22.159229   59120 main.go:141] libmachine: (calico-715748) Calling .GetIP
	I1207 21:38:22.161796   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.162130   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:22.162156   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.162437   59120 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/config.json ...
	I1207 21:38:22.162640   59120 start.go:128] duration metric: createHost completed in 27.567246586s
	I1207 21:38:22.162665   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:22.164850   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.165225   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:22.165246   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.165416   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHPort
	I1207 21:38:22.165597   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:22.165741   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:22.165890   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHUsername
	I1207 21:38:22.166029   59120 main.go:141] libmachine: Using SSH client type: native
	I1207 21:38:22.166337   59120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1207 21:38:22.166348   59120 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:38:22.278766   59120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701985102.263299096
	
	I1207 21:38:22.278787   59120 fix.go:206] guest clock: 1701985102.263299096
	I1207 21:38:22.278796   59120 fix.go:219] Guest: 2023-12-07 21:38:22.263299096 +0000 UTC Remote: 2023-12-07 21:38:22.16265209 +0000 UTC m=+27.689974053 (delta=100.647006ms)
	I1207 21:38:22.278819   59120 fix.go:190] guest clock delta is within tolerance: 100.647006ms
	I1207 21:38:22.278827   59120 start.go:83] releasing machines lock for "calico-715748", held for 27.683553009s
	I1207 21:38:22.278854   59120 main.go:141] libmachine: (calico-715748) Calling .DriverName
	I1207 21:38:22.279123   59120 main.go:141] libmachine: (calico-715748) Calling .GetIP
	I1207 21:38:22.281844   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.282203   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:22.282227   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.282420   59120 main.go:141] libmachine: (calico-715748) Calling .DriverName
	I1207 21:38:22.282904   59120 main.go:141] libmachine: (calico-715748) Calling .DriverName
	I1207 21:38:22.283076   59120 main.go:141] libmachine: (calico-715748) Calling .DriverName
	I1207 21:38:22.283176   59120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:38:22.283208   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:22.283314   59120 ssh_runner.go:195] Run: cat /version.json
	I1207 21:38:22.283340   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHHostname
	I1207 21:38:22.285939   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.286134   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.286313   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:22.286346   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.286483   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:22.286506   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHPort
	I1207 21:38:22.286511   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:22.286723   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHPort
	I1207 21:38:22.286738   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:22.286874   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHUsername
	I1207 21:38:22.286916   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHKeyPath
	I1207 21:38:22.287050   59120 main.go:141] libmachine: (calico-715748) Calling .GetSSHUsername
	I1207 21:38:22.287066   59120 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/id_rsa Username:docker}
	I1207 21:38:22.287194   59120 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/calico-715748/id_rsa Username:docker}
	I1207 21:38:22.404801   59120 ssh_runner.go:195] Run: systemctl --version
	I1207 21:38:22.410609   59120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:38:22.567968   59120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:38:22.573733   59120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:38:22.573795   59120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:38:22.589243   59120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:38:22.589259   59120 start.go:475] detecting cgroup driver to use...
	I1207 21:38:22.589303   59120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:38:22.603428   59120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:38:22.615629   59120 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:38:22.615682   59120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:38:22.627721   59120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:38:22.640102   59120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:38:22.748421   59120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:38:22.873430   59120 docker.go:219] disabling docker service ...
	I1207 21:38:22.873496   59120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:38:22.887727   59120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:38:22.900202   59120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:38:23.016573   59120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:38:23.127568   59120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:38:23.141017   59120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:38:23.158480   59120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:38:23.158551   59120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:38:23.167561   59120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:38:23.167625   59120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:38:23.176937   59120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:38:23.186039   59120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:38:23.195134   59120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:38:23.204266   59120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:38:23.212321   59120 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:38:23.212384   59120 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:38:23.225935   59120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:38:23.236299   59120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:38:23.355653   59120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:38:23.553285   59120 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:38:23.553375   59120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:38:23.558480   59120 start.go:543] Will wait 60s for crictl version
	I1207 21:38:23.558539   59120 ssh_runner.go:195] Run: which crictl
	I1207 21:38:23.562207   59120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:38:23.602737   59120 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:38:23.602831   59120 ssh_runner.go:195] Run: crio --version
	I1207 21:38:23.652243   59120 ssh_runner.go:195] Run: crio --version
	I1207 21:38:23.704915   59120 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:38:23.706413   59120 main.go:141] libmachine: (calico-715748) Calling .GetIP
	I1207 21:38:23.708869   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:23.709163   59120 main.go:141] libmachine: (calico-715748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f9:ea", ip: ""} in network mk-calico-715748: {Iface:virbr2 ExpiryTime:2023-12-07 22:38:11 +0000 UTC Type:0 Mac:52:54:00:59:f9:ea Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:calico-715748 Clientid:01:52:54:00:59:f9:ea}
	I1207 21:38:23.709186   59120 main.go:141] libmachine: (calico-715748) DBG | domain calico-715748 has defined IP address 192.168.50.4 and MAC address 52:54:00:59:f9:ea in network mk-calico-715748
	I1207 21:38:23.709456   59120 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1207 21:38:23.713721   59120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:38:23.727152   59120 localpath.go:92] copying /home/jenkins/minikube-integration/17719-9628/.minikube/client.crt -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt
	I1207 21:38:23.727273   59120 localpath.go:117] copying /home/jenkins/minikube-integration/17719-9628/.minikube/client.key -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.key
	I1207 21:38:23.727362   59120 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:38:23.727406   59120 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:38:23.761065   59120 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:38:23.761135   59120 ssh_runner.go:195] Run: which lz4
	I1207 21:38:23.765038   59120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 21:38:23.769423   59120 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:38:23.769471   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:38:25.648776   59120 crio.go:444] Took 1.883787 seconds to copy over tarball
	I1207 21:38:25.648848   59120 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:38:28.848905   59120 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.200029485s)
	I1207 21:38:28.848932   59120 crio.go:451] Took 3.200131 seconds to extract the tarball
	I1207 21:38:28.848944   59120 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:38:28.903589   59120 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:38:28.985794   59120 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:38:28.985824   59120 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:38:28.985903   59120 ssh_runner.go:195] Run: crio config
	I1207 21:38:29.044659   59120 cni.go:84] Creating CNI manager for "calico"
	I1207 21:38:29.044691   59120 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:38:29.044710   59120 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.4 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-715748 NodeName:calico-715748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:38:29.044836   59120 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-715748"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:38:29.044899   59120 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=calico-715748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:calico-715748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I1207 21:38:29.044950   59120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:38:29.060049   59120 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:38:29.060131   59120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:38:29.070288   59120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1207 21:38:29.089276   59120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:38:29.107780   59120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1207 21:38:29.127282   59120 ssh_runner.go:195] Run: grep 192.168.50.4	control-plane.minikube.internal$ /etc/hosts
	I1207 21:38:29.132459   59120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:38:29.148808   59120 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748 for IP: 192.168.50.4
	I1207 21:38:29.148842   59120 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:38:29.149009   59120 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:38:29.149064   59120 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:38:29.149200   59120 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.key
	I1207 21:38:29.149272   59120 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.key.982e8d65
	I1207 21:38:29.149315   59120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.crt.982e8d65 with IP's: [192.168.50.4 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 21:38:29.406377   59120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.crt.982e8d65 ...
	I1207 21:38:29.406407   59120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.crt.982e8d65: {Name:mk0e94fa89572bf3c6d128bcf9fa3cd5edd959a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:38:29.406563   59120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.key.982e8d65 ...
	I1207 21:38:29.406575   59120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.key.982e8d65: {Name:mk15a3e766b1dff047b1b97c0ac50798dfea67a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:38:29.406641   59120 certs.go:337] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.crt.982e8d65 -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.crt
	I1207 21:38:29.406698   59120 certs.go:341] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.key.982e8d65 -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.key
	I1207 21:38:29.406746   59120 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/proxy-client.key
	I1207 21:38:29.406758   59120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/proxy-client.crt with IP's: []
	I1207 21:38:29.903191   59120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/proxy-client.crt ...
	I1207 21:38:29.907817   59120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/proxy-client.crt: {Name:mkcc814dd0d6319eb5ad8a4351778085075992a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:38:29.908009   59120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/proxy-client.key ...
	I1207 21:38:29.908029   59120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/proxy-client.key: {Name:mk4cbe928e740488c05b04aee68ad330898bb91c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:38:29.908264   59120 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:38:29.908321   59120 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:38:29.908344   59120 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:38:29.908382   59120 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:38:29.908419   59120 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:38:29.908450   59120 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:38:29.908520   59120 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:38:29.909342   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:38:29.939828   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:38:29.971461   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:38:30.001274   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 21:38:30.027086   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:38:30.053872   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:38:30.077538   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:38:30.100224   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:38:30.123519   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:38:30.149137   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:38:30.173542   59120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:38:30.198530   59120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:38:30.218636   59120 ssh_runner.go:195] Run: openssl version
	I1207 21:38:30.226175   59120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:38:30.236199   59120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:38:30.241783   59120 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:38:30.241835   59120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:38:30.248207   59120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:38:30.258248   59120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:38:30.267943   59120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:38:30.272281   59120 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:38:30.272326   59120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:38:30.277356   59120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:38:30.286488   59120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:38:30.295892   59120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:38:30.300162   59120 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:38:30.300210   59120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:38:30.305603   59120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:38:30.315157   59120 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:38:30.319358   59120 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 21:38:30.319413   59120 kubeadm.go:404] StartCluster: {Name:calico-715748 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:calico-715748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:38:30.319504   59120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:38:30.319569   59120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:38:30.357405   59120 cri.go:89] found id: ""
	I1207 21:38:30.357480   59120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:38:30.366444   59120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:38:30.375036   59120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:38:30.384086   59120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:38:30.384125   59120 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:38:30.434202   59120 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 21:38:30.434295   59120 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:38:30.578242   59120 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:38:30.578388   59120 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:38:30.578516   59120 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:38:30.831804   59120 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 21:16:17 UTC, ends at Thu 2023-12-07 21:38:31 UTC. --
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.761532133Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=253c4783-51e2-45aa-a98b-2a2026d9e53b name=/runtime.v1.RuntimeService/Version
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.763230941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=025477bb-7cc1-4179-8f87-988346b51710 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.763906676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701985111763884641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=025477bb-7cc1-4179-8f87-988346b51710 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.764734601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c4c4ae98-064f-444e-a903-bb206c22efd7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.764798923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c4c4ae98-064f-444e-a903-bb206c22efd7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.765073945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701983849340547526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf3f61d8578c5661844ba3a2252aba6bf9278a77f2fa9201d7f2c8d1555f9b6,PodSandboxId:3356df4ab45c2121d4528d873921db6267ea95f75792c6cdb9f6799aaf6f1c53,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701983834509768265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 40929895-a56a-4b7c-8f5e-2bf0e8711984,},Annotations:map[string]string{io.kubernetes.container.hash: 428bfa41,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7,PodSandboxId:0a6c420b1a817c9e9ba9c1fc2ac08360f9bbdcf8b2b7cc04cedf26806b429d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983831750009289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-drrlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdd350f-1ec9-42f2-aac8-63015e2f22c2,},Annotations:map[string]string{io.kubernetes.container.hash: b307c476,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9,PodSandboxId:88d55f09318dbb4bb2faa009fb064007bc46be373bdbfcb3bb1904ab7811953d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983817975704578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmx2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f466e5e-a6b2-4413-b456-7a90bc120735,},Annotations:map[string]string{io.kubernetes.container.hash: 76e83d38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701983817998362097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
dc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc,PodSandboxId:6e5018b28ed3deba5fdc5ee96a4c6f1d2e58d929953e007c477c91c66e7748f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983809318791800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63722c9beb08c64e87aca0ac5a03a3b3,},An
notations:map[string]string{io.kubernetes.container.hash: 5481d999,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4,PodSandboxId:08021fa635918b8aac0028f4cb560e3f3e7c4ab30f3270c499d764886d23144a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983808926515135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5571fbf22464376953aac83f089be6f,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358,PodSandboxId:d061d98cecec543657c0a5cfcd5281c0a0b2b9b9f777ede392bd286600d4b1ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983808285831614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ff3fe476a4d19df3c21e4eeff661f5,},An
notations:map[string]string{io.kubernetes.container.hash: 464e5f64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c,PodSandboxId:e6cdd84c0e0b0df2e10217af3298e29e5ab61eda7863bf35c7bdfff025db6197,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983808131844155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
048811c4e837144b51e5bb09fb52972,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c4c4ae98-064f-444e-a903-bb206c22efd7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.809532851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0c693015-9608-4aba-95f9-6641d22506fe name=/runtime.v1.RuntimeService/Version
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.809668939Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0c693015-9608-4aba-95f9-6641d22506fe name=/runtime.v1.RuntimeService/Version
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.810502222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a036d262-96ca-4e28-9d19-da5f7dd9893c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.810974918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701985111810954156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a036d262-96ca-4e28-9d19-da5f7dd9893c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.811614507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a32cf5b3-daef-4b80-b62e-c1444d3b57db name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.811688951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a32cf5b3-daef-4b80-b62e-c1444d3b57db name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.811888663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701983849340547526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf3f61d8578c5661844ba3a2252aba6bf9278a77f2fa9201d7f2c8d1555f9b6,PodSandboxId:3356df4ab45c2121d4528d873921db6267ea95f75792c6cdb9f6799aaf6f1c53,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701983834509768265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 40929895-a56a-4b7c-8f5e-2bf0e8711984,},Annotations:map[string]string{io.kubernetes.container.hash: 428bfa41,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7,PodSandboxId:0a6c420b1a817c9e9ba9c1fc2ac08360f9bbdcf8b2b7cc04cedf26806b429d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983831750009289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-drrlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdd350f-1ec9-42f2-aac8-63015e2f22c2,},Annotations:map[string]string{io.kubernetes.container.hash: b307c476,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9,PodSandboxId:88d55f09318dbb4bb2faa009fb064007bc46be373bdbfcb3bb1904ab7811953d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983817975704578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmx2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f466e5e-a6b2-4413-b456-7a90bc120735,},Annotations:map[string]string{io.kubernetes.container.hash: 76e83d38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701983817998362097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
dc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc,PodSandboxId:6e5018b28ed3deba5fdc5ee96a4c6f1d2e58d929953e007c477c91c66e7748f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983809318791800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63722c9beb08c64e87aca0ac5a03a3b3,},An
notations:map[string]string{io.kubernetes.container.hash: 5481d999,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4,PodSandboxId:08021fa635918b8aac0028f4cb560e3f3e7c4ab30f3270c499d764886d23144a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983808926515135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5571fbf22464376953aac83f089be6f,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358,PodSandboxId:d061d98cecec543657c0a5cfcd5281c0a0b2b9b9f777ede392bd286600d4b1ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983808285831614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ff3fe476a4d19df3c21e4eeff661f5,},An
notations:map[string]string{io.kubernetes.container.hash: 464e5f64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c,PodSandboxId:e6cdd84c0e0b0df2e10217af3298e29e5ab61eda7863bf35c7bdfff025db6197,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983808131844155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
048811c4e837144b51e5bb09fb52972,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a32cf5b3-daef-4b80-b62e-c1444d3b57db name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.848836699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cd2d1db0-2914-4229-9707-b8170dba7c1c name=/runtime.v1.RuntimeService/Version
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.848918815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cd2d1db0-2914-4229-9707-b8170dba7c1c name=/runtime.v1.RuntimeService/Version
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.850950294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8ad3dfb5-050f-4972-8c84-95fcd59d4043 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.851522695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701985111851502657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8ad3dfb5-050f-4972-8c84-95fcd59d4043 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.852414266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=593af552-5c9a-40a8-b9ad-96c13fedc930 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.852483884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=593af552-5c9a-40a8-b9ad-96c13fedc930 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.852748962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701983849340547526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf3f61d8578c5661844ba3a2252aba6bf9278a77f2fa9201d7f2c8d1555f9b6,PodSandboxId:3356df4ab45c2121d4528d873921db6267ea95f75792c6cdb9f6799aaf6f1c53,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701983834509768265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 40929895-a56a-4b7c-8f5e-2bf0e8711984,},Annotations:map[string]string{io.kubernetes.container.hash: 428bfa41,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7,PodSandboxId:0a6c420b1a817c9e9ba9c1fc2ac08360f9bbdcf8b2b7cc04cedf26806b429d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983831750009289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-drrlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdd350f-1ec9-42f2-aac8-63015e2f22c2,},Annotations:map[string]string{io.kubernetes.container.hash: b307c476,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9,PodSandboxId:88d55f09318dbb4bb2faa009fb064007bc46be373bdbfcb3bb1904ab7811953d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983817975704578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmx2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f466e5e-a6b2-4413-b456-7a90bc120735,},Annotations:map[string]string{io.kubernetes.container.hash: 76e83d38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701983817998362097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
dc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc,PodSandboxId:6e5018b28ed3deba5fdc5ee96a4c6f1d2e58d929953e007c477c91c66e7748f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983809318791800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63722c9beb08c64e87aca0ac5a03a3b3,},An
notations:map[string]string{io.kubernetes.container.hash: 5481d999,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4,PodSandboxId:08021fa635918b8aac0028f4cb560e3f3e7c4ab30f3270c499d764886d23144a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983808926515135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5571fbf22464376953aac83f089be6f,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358,PodSandboxId:d061d98cecec543657c0a5cfcd5281c0a0b2b9b9f777ede392bd286600d4b1ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983808285831614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ff3fe476a4d19df3c21e4eeff661f5,},An
notations:map[string]string{io.kubernetes.container.hash: 464e5f64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c,PodSandboxId:e6cdd84c0e0b0df2e10217af3298e29e5ab61eda7863bf35c7bdfff025db6197,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983808131844155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
048811c4e837144b51e5bb09fb52972,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=593af552-5c9a-40a8-b9ad-96c13fedc930 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.883537792Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=b3de3f42-a08c-4aa4-83e4-589d676c5a88 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.883983293Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0a6c420b1a817c9e9ba9c1fc2ac08360f9bbdcf8b2b7cc04cedf26806b429d9e,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-drrlk,Uid:abdd350f-1ec9-42f2-aac8-63015e2f22c2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983830948080009,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-drrlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdd350f-1ec9-42f2-aac8-63015e2f22c2,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T21:16:55.056793786Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3356df4ab45c2121d4528d873921db6267ea95f75792c6cdb9f6799aaf6f1c53,Metadata:&PodSandboxMetadata{Name:busybox,Uid:40929895-a56a-4b7c-8f5e-2bf0e8711984,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1701983830937785084,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 40929895-a56a-4b7c-8f5e-2bf0e8711984,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T21:16:55.056788836Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b56874d5505edee332f2ca542f4e1deb15c53c789076044bd4eee06efaf96660,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-qvq95,Uid:ff9eb289-7fe2-4d11-a369-12b1c34a1937,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983823133519723,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-qvq95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9eb289-7fe2-4d11-a369-12b1c34a1937,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07
T21:16:55.056798955Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88d55f09318dbb4bb2faa009fb064007bc46be373bdbfcb3bb1904ab7811953d,Metadata:&PodSandboxMetadata{Name:kube-proxy-nmx2z,Uid:1f466e5e-a6b2-4413-b456-7a90bc120735,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983815414917310,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nmx2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f466e5e-a6b2-4413-b456-7a90bc120735,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-07T21:16:55.056797716Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983815408850629,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2023-12-07T21:16:55.056802081Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d061d98cecec543657c0a5cfcd5281c0a0b2b9b9f777ede392bd286600d4b1ef,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-275828,Uid:62ff3fe476a4d19df3c21e4eeff661f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983807612468927,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ff3fe476a4d19df3c21e4eeff661f5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.254:8444,kubernetes.io/config.hash: 62ff3fe476a4d19df3c21e4eeff661f5,kubernetes.io/config.seen: 2023-12-07T21:16:47.054749810Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6e5018b28ed3deba5fdc5ee96a4c6f1d2e58d929953e007c477c91c66e7748
f0,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-275828,Uid:63722c9beb08c64e87aca0ac5a03a3b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983807582806327,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63722c9beb08c64e87aca0ac5a03a3b3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.254:2379,kubernetes.io/config.hash: 63722c9beb08c64e87aca0ac5a03a3b3,kubernetes.io/config.seen: 2023-12-07T21:16:47.054741734Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:08021fa635918b8aac0028f4cb560e3f3e7c4ab30f3270c499d764886d23144a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-275828,Uid:d5571fbf22464376953aac83f089be6f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983807574130885,Labels:map[string]str
ing{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5571fbf22464376953aac83f089be6f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d5571fbf22464376953aac83f089be6f,kubernetes.io/config.seen: 2023-12-07T21:16:47.054740046Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e6cdd84c0e0b0df2e10217af3298e29e5ab61eda7863bf35c7bdfff025db6197,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-275828,Uid:5048811c4e837144b51e5bb09fb52972,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701983807569892363,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5048811c4e837144b51e5bb09fb52972,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 5048811c4e837144b51e5bb09fb52972,kubernetes.io/config.seen: 2023-12-07T21:16:47.054733852Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=b3de3f42-a08c-4aa4-83e4-589d676c5a88 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.885338936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6e8d2482-4555-4f21-9bcf-b9875ed3751c name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.885419701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6e8d2482-4555-4f21-9bcf-b9875ed3751c name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:38:31 default-k8s-diff-port-275828 crio[723]: time="2023-12-07 21:38:31.885684234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701983849340547526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf3f61d8578c5661844ba3a2252aba6bf9278a77f2fa9201d7f2c8d1555f9b6,PodSandboxId:3356df4ab45c2121d4528d873921db6267ea95f75792c6cdb9f6799aaf6f1c53,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701983834509768265,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 40929895-a56a-4b7c-8f5e-2bf0e8711984,},Annotations:map[string]string{io.kubernetes.container.hash: 428bfa41,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7,PodSandboxId:0a6c420b1a817c9e9ba9c1fc2ac08360f9bbdcf8b2b7cc04cedf26806b429d9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701983831750009289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-drrlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdd350f-1ec9-42f2-aac8-63015e2f22c2,},Annotations:map[string]string{io.kubernetes.container.hash: b307c476,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9,PodSandboxId:88d55f09318dbb4bb2faa009fb064007bc46be373bdbfcb3bb1904ab7811953d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701983817975704578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmx2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f466e5e-a6b2-4413-b456-7a90bc120735,},Annotations:map[string]string{io.kubernetes.container.hash: 76e83d38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e,PodSandboxId:e6ba8d1d11e8561b91c895c12615f1f35bfdc9aca4599de2490f0bd751c7f238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701983817998362097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
dc81a49-dc39-4d36-8d28-f7f3d6a8cab5,},Annotations:map[string]string{io.kubernetes.container.hash: 167a59b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc,PodSandboxId:6e5018b28ed3deba5fdc5ee96a4c6f1d2e58d929953e007c477c91c66e7748f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701983809318791800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63722c9beb08c64e87aca0ac5a03a3b3,},An
notations:map[string]string{io.kubernetes.container.hash: 5481d999,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4,PodSandboxId:08021fa635918b8aac0028f4cb560e3f3e7c4ab30f3270c499d764886d23144a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701983808926515135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5571fbf22464376953aac83f089be6f,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358,PodSandboxId:d061d98cecec543657c0a5cfcd5281c0a0b2b9b9f777ede392bd286600d4b1ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701983808285831614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ff3fe476a4d19df3c21e4eeff661f5,},An
notations:map[string]string{io.kubernetes.container.hash: 464e5f64,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c,PodSandboxId:e6cdd84c0e0b0df2e10217af3298e29e5ab61eda7863bf35c7bdfff025db6197,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701983808131844155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-275828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
048811c4e837144b51e5bb09fb52972,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6e8d2482-4555-4f21-9bcf-b9875ed3751c name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6d19830626a12       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   e6ba8d1d11e85       storage-provisioner
	bcf3f61d8578c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   3356df4ab45c2       busybox
	5a99c774cf004       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      21 minutes ago      Running             coredns                   1                   0a6c420b1a817       coredns-5dd5756b68-drrlk
	40b29d34e8a9e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   e6ba8d1d11e85       storage-provisioner
	e5f03abdf541c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      21 minutes ago      Running             kube-proxy                1                   88d55f09318db       kube-proxy-nmx2z
	333f8e7b3b0ba       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   6e5018b28ed3d       etcd-default-k8s-diff-port-275828
	3d55aee82d6e7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      21 minutes ago      Running             kube-scheduler            1                   08021fa635918       kube-scheduler-default-k8s-diff-port-275828
	0127dcb687572       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      21 minutes ago      Running             kube-apiserver            1                   d061d98cecec5       kube-apiserver-default-k8s-diff-port-275828
	2dfc84b682d89       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      21 minutes ago      Running             kube-controller-manager   1                   e6cdd84c0e0b0       kube-controller-manager-default-k8s-diff-port-275828
	
	* 
	* ==> coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58782 - 47439 "HINFO IN 411158688030276708.324194747714498229. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010667086s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-275828
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-275828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=default-k8s-diff-port-275828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T21_09_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 21:09:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-275828
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 21:38:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 21:37:49 +0000   Thu, 07 Dec 2023 21:09:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 21:37:49 +0000   Thu, 07 Dec 2023 21:09:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 21:37:49 +0000   Thu, 07 Dec 2023 21:09:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 21:37:49 +0000   Thu, 07 Dec 2023 21:17:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.254
	  Hostname:    default-k8s-diff-port-275828
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 893c2f24b7204674972dc2ee75339e3b
	  System UUID:                893c2f24-b720-4674-972d-c2ee75339e3b
	  Boot ID:                    94a71f66-7149-4dfc-9904-3e6c7e919bc9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-drrlk                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-275828                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-275828              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-275828     200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-nmx2z                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-275828              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-qvq95                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-275828 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-275828 event: Registered Node default-k8s-diff-port-275828 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-275828 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-275828 event: Registered Node default-k8s-diff-port-275828 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 7 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070321] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.571142] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.834581] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149351] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.544734] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.875963] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.108835] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.163315] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.114013] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.242982] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[ +17.679029] systemd-fstab-generator[922]: Ignoring "noauto" for root device
	[Dec 7 21:17] kauditd_printk_skb: 19 callbacks suppressed
	[Dec 7 21:37] hrtimer: interrupt took 11137244 ns
	
	* 
	* ==> etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] <==
	* {"level":"info","ts":"2023-12-07T21:26:52.912864Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":872,"took":"2.58114ms","hash":138780491}
	{"level":"info","ts":"2023-12-07T21:26:52.912986Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":138780491,"revision":872,"compact-revision":-1}
	{"level":"info","ts":"2023-12-07T21:31:52.917448Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1114}
	{"level":"info","ts":"2023-12-07T21:31:52.920703Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1114,"took":"2.378647ms","hash":2288014366}
	{"level":"info","ts":"2023-12-07T21:31:52.920855Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2288014366,"revision":1114,"compact-revision":872}
	{"level":"info","ts":"2023-12-07T21:35:35.173425Z","caller":"traceutil/trace.go:171","msg":"trace[826456651] transaction","detail":"{read_only:false; response_revision:1537; number_of_response:1; }","duration":"199.412765ms","start":"2023-12-07T21:35:34.973912Z","end":"2023-12-07T21:35:35.173324Z","steps":["trace[826456651] 'process raft request'  (duration: 198.803853ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:35:49.360363Z","caller":"traceutil/trace.go:171","msg":"trace[2074250341] transaction","detail":"{read_only:false; response_revision:1550; number_of_response:1; }","duration":"102.625101ms","start":"2023-12-07T21:35:49.257724Z","end":"2023-12-07T21:35:49.360349Z","steps":["trace[2074250341] 'process raft request'  (duration: 102.503017ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:36:53.317317Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1357}
	{"level":"warn","ts":"2023-12-07T21:36:53.319172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.474375ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17233741156965577479 username:\"kube-apiserver-etcd-client\" auth_revision:1 > compaction:<revision:1357 > ","response":"size:5"}
	{"level":"info","ts":"2023-12-07T21:36:53.319401Z","caller":"traceutil/trace.go:171","msg":"trace[1791611148] compact","detail":"{revision:1357; response_revision:1600; }","duration":"376.184318ms","start":"2023-12-07T21:36:52.943179Z","end":"2023-12-07T21:36:53.319363Z","steps":["trace[1791611148] 'process raft request'  (duration: 123.88548ms)","trace[1791611148] 'check and update compact revision'  (duration: 249.35312ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T21:36:53.319493Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:36:52.943162Z","time spent":"376.317096ms","remote":"127.0.0.1:39828","response type":"/etcdserverpb.KV/Compact","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2023-12-07T21:36:53.321204Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1357,"took":"2.335916ms","hash":2821339368}
	{"level":"info","ts":"2023-12-07T21:36:53.321282Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2821339368,"revision":1357,"compact-revision":1114}
	{"level":"info","ts":"2023-12-07T21:36:53.820117Z","caller":"traceutil/trace.go:171","msg":"trace[589261232] transaction","detail":"{read_only:false; response_revision:1601; number_of_response:1; }","duration":"102.345682ms","start":"2023-12-07T21:36:53.717753Z","end":"2023-12-07T21:36:53.820098Z","steps":["trace[589261232] 'process raft request'  (duration: 102.152704ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:36:54.036071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.037529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-07T21:36:54.03619Z","caller":"traceutil/trace.go:171","msg":"trace[1117232815] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1601; }","duration":"126.171806ms","start":"2023-12-07T21:36:53.909981Z","end":"2023-12-07T21:36:54.036153Z","steps":["trace[1117232815] 'range keys from in-memory index tree'  (duration: 125.610667ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:37:42.297549Z","caller":"traceutil/trace.go:171","msg":"trace[1012524012] transaction","detail":"{read_only:false; response_revision:1641; number_of_response:1; }","duration":"180.097299ms","start":"2023-12-07T21:37:42.117414Z","end":"2023-12-07T21:37:42.297511Z","steps":["trace[1012524012] 'process raft request'  (duration: 179.923733ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:37:44.414845Z","caller":"traceutil/trace.go:171","msg":"trace[737142195] transaction","detail":"{read_only:false; response_revision:1642; number_of_response:1; }","duration":"108.310504ms","start":"2023-12-07T21:37:44.306517Z","end":"2023-12-07T21:37:44.414827Z","steps":["trace[737142195] 'process raft request'  (duration: 108.021111ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:38:26.152242Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.899318ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17233741156965577944 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.254\" mod_revision:1668 > success:<request_put:<key:\"/registry/masterleases/192.168.39.254\" value_size:68 lease:8010369120110802134 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.254\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-07T21:38:26.152359Z","caller":"traceutil/trace.go:171","msg":"trace[168965860] transaction","detail":"{read_only:false; response_revision:1676; number_of_response:1; }","duration":"156.850374ms","start":"2023-12-07T21:38:25.995492Z","end":"2023-12-07T21:38:26.152342Z","steps":["trace[168965860] 'process raft request'  (duration: 31.709821ms)","trace[168965860] 'compare'  (duration: 124.731239ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T21:38:29.025532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.202376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-07T21:38:29.025769Z","caller":"traceutil/trace.go:171","msg":"trace[593707901] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1678; }","duration":"116.570337ms","start":"2023-12-07T21:38:28.90918Z","end":"2023-12-07T21:38:29.025751Z","steps":["trace[593707901] 'range keys from in-memory index tree'  (duration: 116.110027ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:38:29.406702Z","caller":"traceutil/trace.go:171","msg":"trace[1825317313] transaction","detail":"{read_only:false; response_revision:1679; number_of_response:1; }","duration":"284.980698ms","start":"2023-12-07T21:38:29.121702Z","end":"2023-12-07T21:38:29.406683Z","steps":["trace[1825317313] 'process raft request'  (duration: 284.777897ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:38:30.677708Z","caller":"traceutil/trace.go:171","msg":"trace[2010370341] transaction","detail":"{read_only:false; response_revision:1680; number_of_response:1; }","duration":"252.895608ms","start":"2023-12-07T21:38:30.424792Z","end":"2023-12-07T21:38:30.677687Z","steps":["trace[2010370341] 'process raft request'  (duration: 252.444701ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T21:38:30.863854Z","caller":"traceutil/trace.go:171","msg":"trace[451925480] transaction","detail":"{read_only:false; response_revision:1681; number_of_response:1; }","duration":"168.315847ms","start":"2023-12-07T21:38:30.695522Z","end":"2023-12-07T21:38:30.863838Z","steps":["trace[451925480] 'process raft request'  (duration: 154.329308ms)","trace[451925480] 'compare'  (duration: 13.85985ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  21:38:32 up 22 min,  0 users,  load average: 0.05, 0.12, 0.15
	Linux default-k8s-diff-port-275828 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] <==
	* W1207 21:34:55.657510       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:34:55.657645       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:34:55.657680       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:35:54.534711       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1207 21:36:54.534994       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:36:54.660401       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:36:54.660536       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:36:54.661049       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:36:55.660962       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:36:55.661033       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:36:55.661047       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:36:55.661195       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:36:55.661362       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:36:55.662205       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:37:54.534796       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 21:37:55.661992       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:37:55.662062       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:37:55.662071       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:37:55.663269       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:37:55.663342       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:37:55.663349       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] <==
	* E1207 21:33:07.765909       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:33:08.296163       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1207 21:33:27.123246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="837.984µs"
	E1207 21:33:37.771152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:33:38.305856       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1207 21:33:39.116548       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="140.079µs"
	E1207 21:34:07.777730       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:34:08.314946       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:34:37.782387       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:34:38.323284       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:35:07.793277       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:35:08.332510       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:35:37.798946       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:35:38.341510       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:36:07.804913       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:36:08.357224       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:36:37.811677       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:36:38.365518       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:37:07.826067       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:37:08.377334       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:37:37.831930       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:37:38.390028       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:38:07.839689       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:38:08.401036       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1207 21:38:29.411732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="380µs"
	
	* 
	* ==> kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] <==
	* I1207 21:16:58.611922       1 server_others.go:69] "Using iptables proxy"
	I1207 21:16:58.658053       1 node.go:141] Successfully retrieved node IP: 192.168.39.254
	I1207 21:16:58.767484       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 21:16:58.767669       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 21:16:58.776064       1 server_others.go:152] "Using iptables Proxier"
	I1207 21:16:58.776175       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 21:16:58.781493       1 server.go:846] "Version info" version="v1.28.4"
	I1207 21:16:58.781780       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 21:16:58.783148       1 config.go:188] "Starting service config controller"
	I1207 21:16:58.783992       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 21:16:58.784724       1 config.go:315] "Starting node config controller"
	I1207 21:16:58.791737       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 21:16:58.784918       1 config.go:97] "Starting endpoint slice config controller"
	I1207 21:16:58.795071       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 21:16:58.795213       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 21:16:58.884761       1 shared_informer.go:318] Caches are synced for service config
	I1207 21:16:58.891921       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] <==
	* W1207 21:16:54.669452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:16:54.669497       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 21:16:54.669643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1207 21:16:54.669689       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1207 21:16:54.669869       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:16:54.669927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 21:16:54.669889       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 21:16:54.669978       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1207 21:16:54.670126       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 21:16:54.670169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 21:16:54.670261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1207 21:16:54.673804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 21:16:54.673856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1207 21:16:54.673963       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 21:16:54.673997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 21:16:54.674079       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 21:16:54.674115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1207 21:16:54.674190       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 21:16:54.674229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1207 21:16:54.674271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 21:16:54.674305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 21:16:54.673800       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1207 21:16:54.674516       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1207 21:16:54.674653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1207 21:16:55.653316       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 21:16:17 UTC, ends at Thu 2023-12-07 21:38:32 UTC. --
	Dec 07 21:36:00 default-k8s-diff-port-275828 kubelet[928]: E1207 21:36:00.099679     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:36:12 default-k8s-diff-port-275828 kubelet[928]: E1207 21:36:12.100215     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:36:25 default-k8s-diff-port-275828 kubelet[928]: E1207 21:36:25.100946     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:36:37 default-k8s-diff-port-275828 kubelet[928]: E1207 21:36:37.100534     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:36:47 default-k8s-diff-port-275828 kubelet[928]: E1207 21:36:47.115532     928 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:36:47 default-k8s-diff-port-275828 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:36:47 default-k8s-diff-port-275828 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:36:47 default-k8s-diff-port-275828 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:36:47 default-k8s-diff-port-275828 kubelet[928]: E1207 21:36:47.125903     928 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Dec 07 21:36:48 default-k8s-diff-port-275828 kubelet[928]: E1207 21:36:48.100063     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:37:01 default-k8s-diff-port-275828 kubelet[928]: E1207 21:37:01.102847     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:37:16 default-k8s-diff-port-275828 kubelet[928]: E1207 21:37:16.099963     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:37:27 default-k8s-diff-port-275828 kubelet[928]: E1207 21:37:27.099239     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:37:39 default-k8s-diff-port-275828 kubelet[928]: E1207 21:37:39.100916     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:37:47 default-k8s-diff-port-275828 kubelet[928]: E1207 21:37:47.118037     928 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:37:47 default-k8s-diff-port-275828 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:37:47 default-k8s-diff-port-275828 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:37:47 default-k8s-diff-port-275828 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:37:51 default-k8s-diff-port-275828 kubelet[928]: E1207 21:37:51.099859     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:38:06 default-k8s-diff-port-275828 kubelet[928]: E1207 21:38:06.099284     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:38:18 default-k8s-diff-port-275828 kubelet[928]: E1207 21:38:18.122705     928 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 07 21:38:18 default-k8s-diff-port-275828 kubelet[928]: E1207 21:38:18.122784     928 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 07 21:38:18 default-k8s-diff-port-275828 kubelet[928]: E1207 21:38:18.122993     928 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l9pgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-qvq95_kube-system(ff9eb289-7fe2-4d11-a369-12b1c34a1937): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 07 21:38:18 default-k8s-diff-port-275828 kubelet[928]: E1207 21:38:18.123028     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	Dec 07 21:38:29 default-k8s-diff-port-275828 kubelet[928]: E1207 21:38:29.101260     928 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qvq95" podUID="ff9eb289-7fe2-4d11-a369-12b1c34a1937"
	
	* 
	* ==> storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] <==
	* I1207 21:16:58.341252       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 21:17:28.344919       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] <==
	* I1207 21:17:29.449360       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 21:17:29.464945       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 21:17:29.465084       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 21:17:46.870294       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 21:17:46.870848       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77fef67e-71b0-4413-86fa-eb3e04ca573f", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-275828_73d74afe-a48d-4e7a-a97d-cdb8f6434c76 became leader
	I1207 21:17:46.870914       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-275828_73d74afe-a48d-4e7a-a97d-cdb8f6434c76!
	I1207 21:17:46.971949       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-275828_73d74afe-a48d-4e7a-a97d-cdb8f6434c76!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-275828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qvq95
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-275828 describe pod metrics-server-57f55c9bc5-qvq95
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-275828 describe pod metrics-server-57f55c9bc5-qvq95: exit status 1 (70.518954ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qvq95" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-275828 describe pod metrics-server-57f55c9bc5-qvq95: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (484.59s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (330.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1207 21:30:51.989714   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 21:31:05.939079   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 21:31:41.699524   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-950431 -n no-preload-950431
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-07 21:36:15.038729597 +0000 UTC m=+5699.206874567
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-950431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-950431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.375µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-950431 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950431 -n no-preload-950431
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-950431 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-950431 logs -n 25: (1.456971335s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:10 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:08 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-099448                              | stopped-upgrade-099448       | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:07 UTC |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-483745        | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-121798 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | disable-driver-mounts-121798                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:10 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-598346            | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC | 07 Dec 23 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-950431             | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-275828  | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-483745             | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-598346                 | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-950431                  | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-275828       | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:35 UTC | 07 Dec 23 21:35 UTC |
	| start   | -p newest-cni-155321 --memory=2200 --alsologtostderr   | newest-cni-155321            | jenkins | v1.32.0 | 07 Dec 23 21:35 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 21:35:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 21:35:17.735497   56329 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:35:17.735625   56329 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:35:17.735635   56329 out.go:309] Setting ErrFile to fd 2...
	I1207 21:35:17.735639   56329 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:35:17.735909   56329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:35:17.736513   56329 out.go:303] Setting JSON to false
	I1207 21:35:17.737544   56329 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8264,"bootTime":1701976654,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:35:17.737611   56329 start.go:138] virtualization: kvm guest
	I1207 21:35:17.740139   56329 out.go:177] * [newest-cni-155321] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:35:17.741689   56329 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:35:17.743074   56329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:35:17.741684   56329 notify.go:220] Checking for updates...
	I1207 21:35:17.744593   56329 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:35:17.746033   56329 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:35:17.747387   56329 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:35:17.748682   56329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:35:17.750648   56329 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:35:17.750747   56329 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:35:17.750837   56329 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:35:17.750909   56329 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:35:17.789485   56329 out.go:177] * Using the kvm2 driver based on user configuration
	I1207 21:35:17.790803   56329 start.go:298] selected driver: kvm2
	I1207 21:35:17.790818   56329 start.go:902] validating driver "kvm2" against <nil>
	I1207 21:35:17.790828   56329 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:35:17.791540   56329 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:35:17.791613   56329 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:35:17.808190   56329 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:35:17.808248   56329 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1207 21:35:17.808268   56329 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1207 21:35:17.808485   56329 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 21:35:17.808546   56329 cni.go:84] Creating CNI manager for ""
	I1207 21:35:17.808559   56329 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:35:17.808571   56329 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 21:35:17.808584   56329 start_flags.go:323] config:
	{Name:newest-cni-155321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-155321 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:35:17.808720   56329 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:35:17.810486   56329 out.go:177] * Starting control plane node newest-cni-155321 in cluster newest-cni-155321
	I1207 21:35:17.811813   56329 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:35:17.811848   56329 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1207 21:35:17.811861   56329 cache.go:56] Caching tarball of preloaded images
	I1207 21:35:17.811927   56329 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:35:17.811936   56329 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1207 21:35:17.812038   56329 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/config.json ...
	I1207 21:35:17.812056   56329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/config.json: {Name:mk8127cea3dbd90ec9280c5dd50f897a6b377040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:35:17.812174   56329 start.go:365] acquiring machines lock for newest-cni-155321: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:35:17.812200   56329 start.go:369] acquired machines lock for "newest-cni-155321" in 14.045µs
	I1207 21:35:17.812216   56329 start.go:93] Provisioning new machine with config: &{Name:newest-cni-155321 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-155321 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:35:17.812285   56329 start.go:125] createHost starting for "" (driver="kvm2")
	I1207 21:35:17.814059   56329 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 21:35:17.814178   56329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:35:17.814219   56329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:35:17.829276   56329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I1207 21:35:17.829811   56329 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:35:17.830408   56329 main.go:141] libmachine: Using API Version  1
	I1207 21:35:17.830430   56329 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:35:17.830804   56329 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:35:17.830986   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetMachineName
	I1207 21:35:17.831147   56329 main.go:141] libmachine: (newest-cni-155321) Calling .DriverName
	I1207 21:35:17.831284   56329 start.go:159] libmachine.API.Create for "newest-cni-155321" (driver="kvm2")
	I1207 21:35:17.831320   56329 client.go:168] LocalClient.Create starting
	I1207 21:35:17.831356   56329 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem
	I1207 21:35:17.831410   56329 main.go:141] libmachine: Decoding PEM data...
	I1207 21:35:17.831445   56329 main.go:141] libmachine: Parsing certificate...
	I1207 21:35:17.831515   56329 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem
	I1207 21:35:17.831551   56329 main.go:141] libmachine: Decoding PEM data...
	I1207 21:35:17.831572   56329 main.go:141] libmachine: Parsing certificate...
	I1207 21:35:17.831593   56329 main.go:141] libmachine: Running pre-create checks...
	I1207 21:35:17.831632   56329 main.go:141] libmachine: (newest-cni-155321) Calling .PreCreateCheck
	I1207 21:35:17.831983   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetConfigRaw
	I1207 21:35:17.832391   56329 main.go:141] libmachine: Creating machine...
	I1207 21:35:17.832406   56329 main.go:141] libmachine: (newest-cni-155321) Calling .Create
	I1207 21:35:17.832519   56329 main.go:141] libmachine: (newest-cni-155321) Creating KVM machine...
	I1207 21:35:17.833862   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found existing default KVM network
	I1207 21:35:17.835213   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:17.835041   56351 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:d9:97} reservation:<nil>}
	I1207 21:35:17.835942   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:17.835845   56351 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:c6:79} reservation:<nil>}
	I1207 21:35:17.836971   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:17.836898   56351 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030c800}
	I1207 21:35:17.842737   56329 main.go:141] libmachine: (newest-cni-155321) DBG | trying to create private KVM network mk-newest-cni-155321 192.168.61.0/24...
	I1207 21:35:17.923449   56329 main.go:141] libmachine: (newest-cni-155321) DBG | private KVM network mk-newest-cni-155321 192.168.61.0/24 created
	I1207 21:35:17.923497   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:17.923389   56351 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:35:17.923521   56329 main.go:141] libmachine: (newest-cni-155321) Setting up store path in /home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321 ...
	I1207 21:35:17.923549   56329 main.go:141] libmachine: (newest-cni-155321) Building disk image from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 21:35:17.923572   56329 main.go:141] libmachine: (newest-cni-155321) Downloading /home/jenkins/minikube-integration/17719-9628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso...
	I1207 21:35:18.136275   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:18.136104   56351 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321/id_rsa...
	I1207 21:35:18.358298   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:18.358156   56351 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321/newest-cni-155321.rawdisk...
	I1207 21:35:18.358339   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Writing magic tar header
	I1207 21:35:18.358362   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Writing SSH key tar header
	I1207 21:35:18.358383   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:18.358302   56351 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321 ...
	I1207 21:35:18.358511   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321
	I1207 21:35:18.358548   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube/machines
	I1207 21:35:18.358568   56329 main.go:141] libmachine: (newest-cni-155321) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321 (perms=drwx------)
	I1207 21:35:18.358583   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:35:18.358598   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17719-9628
	I1207 21:35:18.358615   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1207 21:35:18.358626   56329 main.go:141] libmachine: (newest-cni-155321) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube/machines (perms=drwxr-xr-x)
	I1207 21:35:18.358640   56329 main.go:141] libmachine: (newest-cni-155321) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628/.minikube (perms=drwxr-xr-x)
	I1207 21:35:18.358654   56329 main.go:141] libmachine: (newest-cni-155321) Setting executable bit set on /home/jenkins/minikube-integration/17719-9628 (perms=drwxrwxr-x)
	I1207 21:35:18.358666   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Checking permissions on dir: /home/jenkins
	I1207 21:35:18.358682   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Checking permissions on dir: /home
	I1207 21:35:18.358694   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Skipping /home - not owner
	I1207 21:35:18.358706   56329 main.go:141] libmachine: (newest-cni-155321) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1207 21:35:18.358757   56329 main.go:141] libmachine: (newest-cni-155321) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1207 21:35:18.358794   56329 main.go:141] libmachine: (newest-cni-155321) Creating domain...
	I1207 21:35:18.359807   56329 main.go:141] libmachine: (newest-cni-155321) define libvirt domain using xml: 
	I1207 21:35:18.359830   56329 main.go:141] libmachine: (newest-cni-155321) <domain type='kvm'>
	I1207 21:35:18.359842   56329 main.go:141] libmachine: (newest-cni-155321)   <name>newest-cni-155321</name>
	I1207 21:35:18.359858   56329 main.go:141] libmachine: (newest-cni-155321)   <memory unit='MiB'>2200</memory>
	I1207 21:35:18.359873   56329 main.go:141] libmachine: (newest-cni-155321)   <vcpu>2</vcpu>
	I1207 21:35:18.359899   56329 main.go:141] libmachine: (newest-cni-155321)   <features>
	I1207 21:35:18.359914   56329 main.go:141] libmachine: (newest-cni-155321)     <acpi/>
	I1207 21:35:18.359925   56329 main.go:141] libmachine: (newest-cni-155321)     <apic/>
	I1207 21:35:18.359937   56329 main.go:141] libmachine: (newest-cni-155321)     <pae/>
	I1207 21:35:18.359951   56329 main.go:141] libmachine: (newest-cni-155321)     
	I1207 21:35:18.359962   56329 main.go:141] libmachine: (newest-cni-155321)   </features>
	I1207 21:35:18.359976   56329 main.go:141] libmachine: (newest-cni-155321)   <cpu mode='host-passthrough'>
	I1207 21:35:18.359983   56329 main.go:141] libmachine: (newest-cni-155321)   
	I1207 21:35:18.359992   56329 main.go:141] libmachine: (newest-cni-155321)   </cpu>
	I1207 21:35:18.359999   56329 main.go:141] libmachine: (newest-cni-155321)   <os>
	I1207 21:35:18.360006   56329 main.go:141] libmachine: (newest-cni-155321)     <type>hvm</type>
	I1207 21:35:18.360020   56329 main.go:141] libmachine: (newest-cni-155321)     <boot dev='cdrom'/>
	I1207 21:35:18.360034   56329 main.go:141] libmachine: (newest-cni-155321)     <boot dev='hd'/>
	I1207 21:35:18.360047   56329 main.go:141] libmachine: (newest-cni-155321)     <bootmenu enable='no'/>
	I1207 21:35:18.360058   56329 main.go:141] libmachine: (newest-cni-155321)   </os>
	I1207 21:35:18.360069   56329 main.go:141] libmachine: (newest-cni-155321)   <devices>
	I1207 21:35:18.360084   56329 main.go:141] libmachine: (newest-cni-155321)     <disk type='file' device='cdrom'>
	I1207 21:35:18.360110   56329 main.go:141] libmachine: (newest-cni-155321)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321/boot2docker.iso'/>
	I1207 21:35:18.360142   56329 main.go:141] libmachine: (newest-cni-155321)       <target dev='hdc' bus='scsi'/>
	I1207 21:35:18.360156   56329 main.go:141] libmachine: (newest-cni-155321)       <readonly/>
	I1207 21:35:18.360165   56329 main.go:141] libmachine: (newest-cni-155321)     </disk>
	I1207 21:35:18.360180   56329 main.go:141] libmachine: (newest-cni-155321)     <disk type='file' device='disk'>
	I1207 21:35:18.360193   56329 main.go:141] libmachine: (newest-cni-155321)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1207 21:35:18.360211   56329 main.go:141] libmachine: (newest-cni-155321)       <source file='/home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321/newest-cni-155321.rawdisk'/>
	I1207 21:35:18.360223   56329 main.go:141] libmachine: (newest-cni-155321)       <target dev='hda' bus='virtio'/>
	I1207 21:35:18.360235   56329 main.go:141] libmachine: (newest-cni-155321)     </disk>
	I1207 21:35:18.360250   56329 main.go:141] libmachine: (newest-cni-155321)     <interface type='network'>
	I1207 21:35:18.360270   56329 main.go:141] libmachine: (newest-cni-155321)       <source network='mk-newest-cni-155321'/>
	I1207 21:35:18.360282   56329 main.go:141] libmachine: (newest-cni-155321)       <model type='virtio'/>
	I1207 21:35:18.360310   56329 main.go:141] libmachine: (newest-cni-155321)     </interface>
	I1207 21:35:18.360330   56329 main.go:141] libmachine: (newest-cni-155321)     <interface type='network'>
	I1207 21:35:18.360342   56329 main.go:141] libmachine: (newest-cni-155321)       <source network='default'/>
	I1207 21:35:18.360356   56329 main.go:141] libmachine: (newest-cni-155321)       <model type='virtio'/>
	I1207 21:35:18.360370   56329 main.go:141] libmachine: (newest-cni-155321)     </interface>
	I1207 21:35:18.360385   56329 main.go:141] libmachine: (newest-cni-155321)     <serial type='pty'>
	I1207 21:35:18.360394   56329 main.go:141] libmachine: (newest-cni-155321)       <target port='0'/>
	I1207 21:35:18.360399   56329 main.go:141] libmachine: (newest-cni-155321)     </serial>
	I1207 21:35:18.360412   56329 main.go:141] libmachine: (newest-cni-155321)     <console type='pty'>
	I1207 21:35:18.360429   56329 main.go:141] libmachine: (newest-cni-155321)       <target type='serial' port='0'/>
	I1207 21:35:18.360439   56329 main.go:141] libmachine: (newest-cni-155321)     </console>
	I1207 21:35:18.360451   56329 main.go:141] libmachine: (newest-cni-155321)     <rng model='virtio'>
	I1207 21:35:18.360464   56329 main.go:141] libmachine: (newest-cni-155321)       <backend model='random'>/dev/random</backend>
	I1207 21:35:18.360475   56329 main.go:141] libmachine: (newest-cni-155321)     </rng>
	I1207 21:35:18.360484   56329 main.go:141] libmachine: (newest-cni-155321)     
	I1207 21:35:18.360495   56329 main.go:141] libmachine: (newest-cni-155321)     
	I1207 21:35:18.360507   56329 main.go:141] libmachine: (newest-cni-155321)   </devices>
	I1207 21:35:18.360517   56329 main.go:141] libmachine: (newest-cni-155321) </domain>
	I1207 21:35:18.360532   56329 main.go:141] libmachine: (newest-cni-155321) 
	I1207 21:35:18.365205   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:d8:bf:bb in network default
	I1207 21:35:18.365814   56329 main.go:141] libmachine: (newest-cni-155321) Ensuring networks are active...
	I1207 21:35:18.365837   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:18.366536   56329 main.go:141] libmachine: (newest-cni-155321) Ensuring network default is active
	I1207 21:35:18.366808   56329 main.go:141] libmachine: (newest-cni-155321) Ensuring network mk-newest-cni-155321 is active
	I1207 21:35:18.367318   56329 main.go:141] libmachine: (newest-cni-155321) Getting domain xml...
	I1207 21:35:18.368119   56329 main.go:141] libmachine: (newest-cni-155321) Creating domain...
	I1207 21:35:19.648095   56329 main.go:141] libmachine: (newest-cni-155321) Waiting to get IP...
	I1207 21:35:19.649046   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:19.649496   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:19.649519   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:19.649476   56351 retry.go:31] will retry after 215.617843ms: waiting for machine to come up
	I1207 21:35:19.866967   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:19.867523   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:19.867564   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:19.867472   56351 retry.go:31] will retry after 264.056677ms: waiting for machine to come up
	I1207 21:35:20.132703   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:20.133144   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:20.133173   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:20.133109   56351 retry.go:31] will retry after 356.439275ms: waiting for machine to come up
	I1207 21:35:20.491517   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:20.492039   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:20.492061   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:20.491993   56351 retry.go:31] will retry after 381.650752ms: waiting for machine to come up
	I1207 21:35:20.875522   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:20.875946   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:20.875980   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:20.875901   56351 retry.go:31] will retry after 748.853564ms: waiting for machine to come up
	I1207 21:35:21.627025   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:21.627513   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:21.627546   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:21.627445   56351 retry.go:31] will retry after 914.296662ms: waiting for machine to come up
	I1207 21:35:22.543172   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:22.543590   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:22.543619   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:22.543540   56351 retry.go:31] will retry after 986.335661ms: waiting for machine to come up
	I1207 21:35:23.531079   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:23.531603   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:23.531635   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:23.531556   56351 retry.go:31] will retry after 1.102422514s: waiting for machine to come up
	I1207 21:35:24.634925   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:24.635325   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:24.635374   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:24.635290   56351 retry.go:31] will retry after 1.179269829s: waiting for machine to come up
	I1207 21:35:25.815765   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:25.816274   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:25.816307   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:25.816215   56351 retry.go:31] will retry after 1.806867145s: waiting for machine to come up
	I1207 21:35:27.624962   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:27.625461   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:27.625487   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:27.625425   56351 retry.go:31] will retry after 1.946848154s: waiting for machine to come up
	I1207 21:35:29.574150   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:29.574743   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:29.574777   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:29.574696   56351 retry.go:31] will retry after 2.498047541s: waiting for machine to come up
	I1207 21:35:32.075349   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:32.075920   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:32.075950   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:32.075879   56351 retry.go:31] will retry after 3.618097471s: waiting for machine to come up
	I1207 21:35:35.696056   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:35.696504   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find current IP address of domain newest-cni-155321 in network mk-newest-cni-155321
	I1207 21:35:35.696533   56329 main.go:141] libmachine: (newest-cni-155321) DBG | I1207 21:35:35.696469   56351 retry.go:31] will retry after 5.25968167s: waiting for machine to come up
	I1207 21:35:40.960043   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:40.960839   56329 main.go:141] libmachine: (newest-cni-155321) Found IP for machine: 192.168.61.117
	I1207 21:35:40.960866   56329 main.go:141] libmachine: (newest-cni-155321) Reserving static IP address...
	I1207 21:35:40.960903   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has current primary IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:40.961333   56329 main.go:141] libmachine: (newest-cni-155321) DBG | unable to find host DHCP lease matching {name: "newest-cni-155321", mac: "52:54:00:2a:19:1b", ip: "192.168.61.117"} in network mk-newest-cni-155321
	I1207 21:35:41.037943   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Getting to WaitForSSH function...
	I1207 21:35:41.037976   56329 main.go:141] libmachine: (newest-cni-155321) Reserved static IP address: 192.168.61.117
	I1207 21:35:41.037990   56329 main.go:141] libmachine: (newest-cni-155321) Waiting for SSH to be available...
	I1207 21:35:41.040741   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.041195   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:41.041233   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.041409   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Using SSH client type: external
	I1207 21:35:41.041430   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321/id_rsa (-rw-------)
	I1207 21:35:41.041463   56329 main.go:141] libmachine: (newest-cni-155321) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:35:41.041479   56329 main.go:141] libmachine: (newest-cni-155321) DBG | About to run SSH command:
	I1207 21:35:41.041498   56329 main.go:141] libmachine: (newest-cni-155321) DBG | exit 0
	I1207 21:35:41.133777   56329 main.go:141] libmachine: (newest-cni-155321) DBG | SSH cmd err, output: <nil>: 
	I1207 21:35:41.134084   56329 main.go:141] libmachine: (newest-cni-155321) KVM machine creation complete!
	I1207 21:35:41.134449   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetConfigRaw
	I1207 21:35:41.134951   56329 main.go:141] libmachine: (newest-cni-155321) Calling .DriverName
	I1207 21:35:41.135165   56329 main.go:141] libmachine: (newest-cni-155321) Calling .DriverName
	I1207 21:35:41.135324   56329 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1207 21:35:41.135342   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetState
	I1207 21:35:41.136609   56329 main.go:141] libmachine: Detecting operating system of created instance...
	I1207 21:35:41.136624   56329 main.go:141] libmachine: Waiting for SSH to be available...
	I1207 21:35:41.136630   56329 main.go:141] libmachine: Getting to WaitForSSH function...
	I1207 21:35:41.136637   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:41.139035   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.139325   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:41.139357   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.139471   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHPort
	I1207 21:35:41.139657   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:41.139824   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:41.139984   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHUsername
	I1207 21:35:41.140144   56329 main.go:141] libmachine: Using SSH client type: native
	I1207 21:35:41.140546   56329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I1207 21:35:41.140563   56329 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1207 21:35:41.269523   56329 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:35:41.269550   56329 main.go:141] libmachine: Detecting the provisioner...
	I1207 21:35:41.269562   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:41.272395   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.272756   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:41.272781   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.272936   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHPort
	I1207 21:35:41.273150   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:41.273326   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:41.273518   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHUsername
	I1207 21:35:41.273688   56329 main.go:141] libmachine: Using SSH client type: native
	I1207 21:35:41.274186   56329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I1207 21:35:41.274207   56329 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1207 21:35:41.403089   56329 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2b7375-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1207 21:35:41.403175   56329 main.go:141] libmachine: found compatible host: buildroot
	I1207 21:35:41.403190   56329 main.go:141] libmachine: Provisioning with buildroot...
	I1207 21:35:41.403203   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetMachineName
	I1207 21:35:41.403453   56329 buildroot.go:166] provisioning hostname "newest-cni-155321"
	I1207 21:35:41.403480   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetMachineName
	I1207 21:35:41.403656   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:41.406262   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.406670   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:41.406696   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.406833   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHPort
	I1207 21:35:41.407046   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:41.407252   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:41.407400   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHUsername
	I1207 21:35:41.407563   56329 main.go:141] libmachine: Using SSH client type: native
	I1207 21:35:41.408011   56329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I1207 21:35:41.408033   56329 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-155321 && echo "newest-cni-155321" | sudo tee /etc/hostname
	I1207 21:35:41.556003   56329 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-155321
	
	I1207 21:35:41.556036   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:41.558988   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.559480   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:41.559519   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.559682   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHPort
	I1207 21:35:41.559890   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:41.560063   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:41.560204   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHUsername
	I1207 21:35:41.560407   56329 main.go:141] libmachine: Using SSH client type: native
	I1207 21:35:41.560760   56329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I1207 21:35:41.560781   56329 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-155321' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-155321/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-155321' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:35:41.698608   56329 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:35:41.698635   56329 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:35:41.698687   56329 buildroot.go:174] setting up certificates
	I1207 21:35:41.698699   56329 provision.go:83] configureAuth start
	I1207 21:35:41.698714   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetMachineName
	I1207 21:35:41.698998   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetIP
	I1207 21:35:41.701892   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.702332   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:41.702373   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.702544   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:41.704630   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.704923   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:41.704950   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.705098   56329 provision.go:138] copyHostCerts
	I1207 21:35:41.705160   56329 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:35:41.705173   56329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:35:41.705251   56329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:35:41.705355   56329 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:35:41.705368   56329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:35:41.705402   56329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:35:41.705471   56329 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:35:41.705481   56329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:35:41.705510   56329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:35:41.705585   56329 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.newest-cni-155321 san=[192.168.61.117 192.168.61.117 localhost 127.0.0.1 minikube newest-cni-155321]
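	[editor sketch] The "generating server cert" step above signs a server certificate with the SAN list shown (the node IP, localhost, 127.0.0.1, minikube, and the hostname). A hedged Go sketch of that idea using the standard crypto/x509 package; file paths, key format, and validity period are assumptions for illustration, not minikube's code.

	// sketch_server_cert.go: illustrative only -- sign a TLS server cert with an
	// existing CA, using SANs similar to those in the provisioning log above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// mustPEMBlock reads a file and returns its first PEM block (panics on error).
	func mustPEMBlock(path string) *pem.Block {
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in " + path)
		}
		return block
	}

	func main() {
		// Load the existing CA certificate and key (paths are placeholders).
		caCert, err := x509.ParseCertificate(mustPEMBlock("ca.pem").Bytes)
		if err != nil {
			panic(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(mustPEMBlock("ca-key.pem").Bytes)
		if err != nil {
			panic(err)
		}

		// New key pair for the server certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		// Template with SANs like those seen in the log.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-155321"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-155321"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.61.117"), net.ParseIP("127.0.0.1")},
		}

		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}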
	I1207 21:35:41.841328   56329 provision.go:172] copyRemoteCerts
	I1207 21:35:41.841391   56329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:35:41.841415   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:41.844130   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.844567   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:41.844599   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:41.844817   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHPort
	I1207 21:35:41.845045   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:41.845205   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHUsername
	I1207 21:35:41.845346   56329 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321/id_rsa Username:docker}
	I1207 21:35:41.943261   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:35:41.967886   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1207 21:35:41.993356   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:35:42.017895   56329 provision.go:86] duration metric: configureAuth took 319.181627ms
	I1207 21:35:42.017943   56329 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:35:42.018164   56329 config.go:182] Loaded profile config "newest-cni-155321": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:35:42.018256   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:42.021094   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.021437   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:42.021466   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.021609   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHPort
	I1207 21:35:42.021814   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:42.021977   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:42.022161   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHUsername
	I1207 21:35:42.022343   56329 main.go:141] libmachine: Using SSH client type: native
	I1207 21:35:42.022776   56329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I1207 21:35:42.022802   56329 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:35:42.364745   56329 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:35:42.364792   56329 main.go:141] libmachine: Checking connection to Docker...
	I1207 21:35:42.364805   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetURL
	I1207 21:35:42.366345   56329 main.go:141] libmachine: (newest-cni-155321) DBG | Using libvirt version 6000000
	I1207 21:35:42.368679   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.369022   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:42.369052   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.369263   56329 main.go:141] libmachine: Docker is up and running!
	I1207 21:35:42.369280   56329 main.go:141] libmachine: Reticulating splines...
	I1207 21:35:42.369286   56329 client.go:171] LocalClient.Create took 24.537956666s
	I1207 21:35:42.369312   56329 start.go:167] duration metric: libmachine.API.Create for "newest-cni-155321" took 24.538027474s
	I1207 21:35:42.369325   56329 start.go:300] post-start starting for "newest-cni-155321" (driver="kvm2")
	I1207 21:35:42.369339   56329 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:35:42.369360   56329 main.go:141] libmachine: (newest-cni-155321) Calling .DriverName
	I1207 21:35:42.369613   56329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:35:42.369639   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:42.371833   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.372179   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:42.372216   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.372328   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHPort
	I1207 21:35:42.372525   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:42.372704   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHUsername
	I1207 21:35:42.372881   56329 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321/id_rsa Username:docker}
	I1207 21:35:42.469517   56329 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:35:42.474385   56329 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:35:42.474412   56329 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:35:42.474490   56329 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:35:42.474582   56329 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:35:42.474676   56329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:35:42.485205   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:35:42.508817   56329 start.go:303] post-start completed in 139.476722ms
	I1207 21:35:42.508873   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetConfigRaw
	I1207 21:35:42.509437   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetIP
	I1207 21:35:42.511995   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.512295   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:42.512325   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.512645   56329 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/config.json ...
	I1207 21:35:42.512856   56329 start.go:128] duration metric: createHost completed in 24.700561196s
	I1207 21:35:42.512879   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:42.515084   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.515413   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:42.515428   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.515597   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHPort
	I1207 21:35:42.515763   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:42.515927   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:42.516087   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHUsername
	I1207 21:35:42.516267   56329 main.go:141] libmachine: Using SSH client type: native
	I1207 21:35:42.516624   56329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I1207 21:35:42.516638   56329 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:35:42.642805   56329 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701984942.622547074
	
	I1207 21:35:42.642834   56329 fix.go:206] guest clock: 1701984942.622547074
	I1207 21:35:42.642843   56329 fix.go:219] Guest: 2023-12-07 21:35:42.622547074 +0000 UTC Remote: 2023-12-07 21:35:42.512867756 +0000 UTC m=+24.832659388 (delta=109.679318ms)
	I1207 21:35:42.642869   56329 fix.go:190] guest clock delta is within tolerance: 109.679318ms
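	[editor sketch] The guest-clock check above runs date +%s.%N inside the VM, compares it to the host's wall clock, and only resynchronizes when the delta exceeds a tolerance. A small Go sketch of that comparison; the one-second tolerance is an assumption for the example, not minikube's configured value.

	// clockdelta.go: illustrative check of guest vs. host clock skew.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "date +%s.%N"-style output
	// (e.g. "1701984942.622547074") into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1701984942.622547074") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		tolerance := time.Second // assumed tolerance
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}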
	I1207 21:35:42.642875   56329 start.go:83] releasing machines lock for "newest-cni-155321", held for 24.830665649s
	I1207 21:35:42.642900   56329 main.go:141] libmachine: (newest-cni-155321) Calling .DriverName
	I1207 21:35:42.643174   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetIP
	I1207 21:35:42.646062   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.646392   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:42.646424   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.646587   56329 main.go:141] libmachine: (newest-cni-155321) Calling .DriverName
	I1207 21:35:42.647060   56329 main.go:141] libmachine: (newest-cni-155321) Calling .DriverName
	I1207 21:35:42.647235   56329 main.go:141] libmachine: (newest-cni-155321) Calling .DriverName
	I1207 21:35:42.647344   56329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:35:42.647390   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:42.647508   56329 ssh_runner.go:195] Run: cat /version.json
	I1207 21:35:42.647533   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHHostname
	I1207 21:35:42.650226   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.650420   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.650619   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:42.650648   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.650889   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHPort
	I1207 21:35:42.650891   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:42.650921   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:42.651055   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:42.651114   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHPort
	I1207 21:35:42.651202   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHUsername
	I1207 21:35:42.651277   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHKeyPath
	I1207 21:35:42.651344   56329 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321/id_rsa Username:docker}
	I1207 21:35:42.651459   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetSSHUsername
	I1207 21:35:42.651632   56329 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/newest-cni-155321/id_rsa Username:docker}
	I1207 21:35:42.740230   56329 ssh_runner.go:195] Run: systemctl --version
	I1207 21:35:42.765251   56329 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:35:42.926339   56329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:35:42.932153   56329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:35:42.932241   56329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:35:42.947343   56329 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:35:42.947367   56329 start.go:475] detecting cgroup driver to use...
	I1207 21:35:42.947432   56329 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:35:42.963237   56329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:35:42.978654   56329 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:35:42.978717   56329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:35:42.992103   56329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:35:43.005068   56329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:35:43.110196   56329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:35:43.250661   56329 docker.go:219] disabling docker service ...
	I1207 21:35:43.250734   56329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:35:43.264808   56329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:35:43.276643   56329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:35:43.394612   56329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:35:43.522238   56329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:35:43.536195   56329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:35:43.554059   56329 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:35:43.554127   56329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:35:43.563784   56329 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:35:43.563851   56329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:35:43.574081   56329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:35:43.584028   56329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:35:43.593181   56329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:35:43.602719   56329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:35:43.611271   56329 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:35:43.611331   56329 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:35:43.623559   56329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:35:43.632969   56329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:35:43.754645   56329 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:35:43.947326   56329 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:35:43.947410   56329 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:35:43.952594   56329 start.go:543] Will wait 60s for crictl version
	I1207 21:35:43.952657   56329 ssh_runner.go:195] Run: which crictl
	I1207 21:35:43.957533   56329 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:35:44.004081   56329 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:35:44.004154   56329 ssh_runner.go:195] Run: crio --version
	I1207 21:35:44.053690   56329 ssh_runner.go:195] Run: crio --version
	I1207 21:35:44.107279   56329 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1207 21:35:44.108861   56329 main.go:141] libmachine: (newest-cni-155321) Calling .GetIP
	I1207 21:35:44.111745   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:44.112122   56329 main.go:141] libmachine: (newest-cni-155321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:19:1b", ip: ""} in network mk-newest-cni-155321: {Iface:virbr3 ExpiryTime:2023-12-07 22:35:34 +0000 UTC Type:0 Mac:52:54:00:2a:19:1b Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:newest-cni-155321 Clientid:01:52:54:00:2a:19:1b}
	I1207 21:35:44.112160   56329 main.go:141] libmachine: (newest-cni-155321) DBG | domain newest-cni-155321 has defined IP address 192.168.61.117 and MAC address 52:54:00:2a:19:1b in network mk-newest-cni-155321
	I1207 21:35:44.112432   56329 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1207 21:35:44.116805   56329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:35:44.128796   56329 localpath.go:92] copying /home/jenkins/minikube-integration/17719-9628/.minikube/client.crt -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/client.crt
	I1207 21:35:44.128956   56329 localpath.go:117] copying /home/jenkins/minikube-integration/17719-9628/.minikube/client.key -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/client.key
	I1207 21:35:44.131005   56329 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1207 21:35:44.132423   56329 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:35:44.132502   56329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:35:44.173434   56329 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1207 21:35:44.173494   56329 ssh_runner.go:195] Run: which lz4
	I1207 21:35:44.177826   56329 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 21:35:44.182751   56329 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:35:44.182773   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401677649 bytes)

	I1207 21:35:45.771782   56329 crio.go:444] Took 1.593992 seconds to copy over tarball
	I1207 21:35:45.771841   56329 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:35:48.572190   56329 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.800319003s)
	I1207 21:35:48.572225   56329 crio.go:451] Took 2.800421 seconds to extract the tarball
	I1207 21:35:48.572236   56329 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:35:48.612168   56329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:35:48.697128   56329 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:35:48.697151   56329 cache_images.go:84] Images are preloaded, skipping loading
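	[editor sketch] The two "sudo crictl images --output json" runs above bracket the preload: before extracting the tarball the expected kube-apiserver image is missing ("assuming images are not preloaded"), afterwards all images are present. A hedged Go sketch of such a check; the JSON field names follow crictl's common output shape but should be treated as assumptions.

	// preloadcheck.go: illustrative check for a preloaded image via crictl.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// crictlImages mirrors the assumed shape of `crictl images --output json`.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage returns true if any image reported by crictl carries the given tag.
	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			return false, err
		}
		for _, img := range imgs.Images {
			for _, t := range img.RepoTags {
				if strings.EqualFold(t, tag) {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.29.0-rc.1")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		if ok {
			fmt.Println("preloaded images present, skipping load")
		} else {
			fmt.Println("image missing, assuming images are not preloaded")
		}
	}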
	I1207 21:35:48.697223   56329 ssh_runner.go:195] Run: crio config
	I1207 21:35:48.757462   56329 cni.go:84] Creating CNI manager for ""
	I1207 21:35:48.757483   56329 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:35:48.757505   56329 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1207 21:35:48.757528   56329 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.117 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-155321 NodeName:newest-cni-155321 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:35:48.757660   56329 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-155321"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.117
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.117"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:35:48.757742   56329 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-155321 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-155321 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:35:48.757812   56329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1207 21:35:48.769172   56329 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:35:48.769254   56329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:35:48.780196   56329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1207 21:35:48.796160   56329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1207 21:35:48.811681   56329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1207 21:35:48.830832   56329 ssh_runner.go:195] Run: grep 192.168.61.117	control-plane.minikube.internal$ /etc/hosts
	I1207 21:35:48.835145   56329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.117	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:35:48.850282   56329 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321 for IP: 192.168.61.117
	I1207 21:35:48.850319   56329 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:35:48.850511   56329 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:35:48.850561   56329 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:35:48.850660   56329 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/client.key
	I1207 21:35:48.850692   56329 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.key.1784b7dd
	I1207 21:35:48.850709   56329 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.crt.1784b7dd with IP's: [192.168.61.117 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 21:35:49.030862   56329 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.crt.1784b7dd ...
	I1207 21:35:49.030896   56329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.crt.1784b7dd: {Name:mke10ce330c1ebb7a0c442192e6621aa741a3032 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:35:49.031082   56329 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.key.1784b7dd ...
	I1207 21:35:49.031098   56329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.key.1784b7dd: {Name:mk4aa982063272bc37f5ff9538b6e4f3c29bc932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:35:49.031198   56329 certs.go:337] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.crt.1784b7dd -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.crt
	I1207 21:35:49.031288   56329 certs.go:341] copying /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.key.1784b7dd -> /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.key
	I1207 21:35:49.031341   56329 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/proxy-client.key
	I1207 21:35:49.031370   56329 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/proxy-client.crt with IP's: []
	I1207 21:35:49.371840   56329 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/proxy-client.crt ...
	I1207 21:35:49.371868   56329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/proxy-client.crt: {Name:mk0f004f961b0ed589099db65bfb83e0306f6352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:35:49.372065   56329 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/proxy-client.key ...
	I1207 21:35:49.372080   56329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/proxy-client.key: {Name:mkb1674b3bef59bbb8a3525315ec03bd2c381a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:35:49.372276   56329 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:35:49.372314   56329 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:35:49.372325   56329 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:35:49.372351   56329 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:35:49.372372   56329 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:35:49.372397   56329 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:35:49.372432   56329 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:35:49.373014   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:35:49.399273   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:35:49.423027   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:35:49.448409   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:35:49.473017   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:35:49.497246   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:35:49.521214   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:35:49.545423   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:35:49.569625   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:35:49.592856   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:35:49.619952   56329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:35:49.643999   56329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:35:49.662031   56329 ssh_runner.go:195] Run: openssl version
	I1207 21:35:49.668473   56329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:35:49.679541   56329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:35:49.684381   56329 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:35:49.684439   56329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:35:49.690010   56329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:35:49.700780   56329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:35:49.712016   56329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:35:49.717490   56329 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:35:49.717604   56329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:35:49.723733   56329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:35:49.736720   56329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:35:49.749722   56329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:35:49.754972   56329 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:35:49.755040   56329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:35:49.760840   56329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
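	[editor sketch] The repeated pattern above (ls -la the certificate, openssl x509 -hash -noout, then symlink it as /etc/ssl/certs/<hash>.0, e.g. b5213941.0) installs each CA into the OpenSSL hash-lookup directory. A compact Go sketch of one iteration of that pattern; the certificate path is a placeholder.

	// cahash.go: illustrative sketch of linking a CA cert under its OpenSSL
	// subject hash so OpenSSL-based clients can find it in /etc/ssl/certs.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path

		// Same command the provisioner runs: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		hash := strings.TrimSpace(string(out))

		// Link the certificate under its hash name, e.g. /etc/ssl/certs/b5213941.0.
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
			fmt.Println("symlink failed:", err)
			return
		}
		fmt.Printf("%s -> %s\n", link, certPath)
	}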
	I1207 21:35:49.773386   56329 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:35:49.778066   56329 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 21:35:49.778123   56329 kubeadm.go:404] StartCluster: {Name:newest-cni-155321 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-155321 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.117 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:35:49.778195   56329 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:35:49.778234   56329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:35:49.827429   56329 cri.go:89] found id: ""
	I1207 21:35:49.827538   56329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:35:49.838602   56329 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:35:49.848941   56329 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:35:49.859558   56329 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:35:49.859613   56329 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:35:50.071275   56329 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1207 21:35:50.071376   56329 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:35:50.348062   56329 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:35:50.348190   56329 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:35:50.348303   56329 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:35:50.582205   56329 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:35:50.771840   56329 out.go:204]   - Generating certificates and keys ...
	I1207 21:35:50.772019   56329 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:35:50.772132   56329 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:35:50.772239   56329 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 21:35:51.074380   56329 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1207 21:35:51.135896   56329 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1207 21:35:51.315507   56329 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1207 21:35:51.691572   56329 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1207 21:35:51.691736   56329 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-155321] and IPs [192.168.61.117 127.0.0.1 ::1]
	I1207 21:35:51.958330   56329 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1207 21:35:51.958602   56329 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-155321] and IPs [192.168.61.117 127.0.0.1 ::1]
	I1207 21:35:52.167113   56329 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 21:35:52.241839   56329 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 21:35:52.288181   56329 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1207 21:35:52.288287   56329 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:35:52.386952   56329 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:35:52.625019   56329 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 21:35:53.056696   56329 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:35:53.298335   56329 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:35:53.450371   56329 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:35:53.451067   56329 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:35:53.455049   56329 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:35:53.456783   56329 out.go:204]   - Booting up control plane ...
	I1207 21:35:53.456928   56329 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:35:53.457045   56329 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:35:53.457143   56329 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:35:53.474362   56329 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:35:53.476209   56329 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:35:53.476276   56329 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:35:53.623737   56329 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:36:01.629722   56329 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.006470 seconds
	I1207 21:36:01.647893   56329 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:36:01.666159   56329 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:36:02.196029   56329 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:36:02.196276   56329 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-155321 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:36:02.711143   56329 kubeadm.go:322] [bootstrap-token] Using token: 2rw90v.snbydbzv6himqjn1
	I1207 21:36:02.712779   56329 out.go:204]   - Configuring RBAC rules ...
	I1207 21:36:02.712912   56329 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:36:02.718553   56329 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:36:02.732174   56329 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:36:02.736620   56329 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:36:02.740798   56329 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:36:02.745188   56329 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:36:02.763366   56329 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:36:03.056808   56329 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:36:03.126406   56329 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:36:03.126715   56329 kubeadm.go:322] 
	I1207 21:36:03.126819   56329 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:36:03.126832   56329 kubeadm.go:322] 
	I1207 21:36:03.126921   56329 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:36:03.126929   56329 kubeadm.go:322] 
	I1207 21:36:03.126964   56329 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:36:03.127046   56329 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:36:03.127126   56329 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:36:03.127135   56329 kubeadm.go:322] 
	I1207 21:36:03.127229   56329 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:36:03.127240   56329 kubeadm.go:322] 
	I1207 21:36:03.127304   56329 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:36:03.127314   56329 kubeadm.go:322] 
	I1207 21:36:03.127389   56329 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:36:03.127493   56329 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:36:03.127605   56329 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:36:03.127616   56329 kubeadm.go:322] 
	I1207 21:36:03.127718   56329 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:36:03.127812   56329 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:36:03.127823   56329 kubeadm.go:322] 
	I1207 21:36:03.127926   56329 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2rw90v.snbydbzv6himqjn1 \
	I1207 21:36:03.128015   56329 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:36:03.128036   56329 kubeadm.go:322] 	--control-plane 
	I1207 21:36:03.128041   56329 kubeadm.go:322] 
	I1207 21:36:03.128110   56329 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:36:03.128117   56329 kubeadm.go:322] 
	I1207 21:36:03.128181   56329 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2rw90v.snbydbzv6himqjn1 \
	I1207 21:36:03.128270   56329 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:36:03.129003   56329 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:36:03.129031   56329 cni.go:84] Creating CNI manager for ""
	I1207 21:36:03.129040   56329 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:36:03.130911   56329 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:36:03.132468   56329 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:36:03.170107   56329 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:36:03.202613   56329 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:36:03.202688   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:03.202753   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=newest-cni-155321 minikube.k8s.io/updated_at=2023_12_07T21_36_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:03.636083   56329 ops.go:34] apiserver oom_adj: -16
	I1207 21:36:03.636110   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:03.734108   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:04.333354   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:04.833228   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:05.333297   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:05.833579   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:06.333781   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:06.833034   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:07.333171   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:07.832814   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:08.333626   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:08.833426   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:09.333742   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:09.833051   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:10.332917   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:10.833033   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:11.333420   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:11.833176   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:36:12.333413   56329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 21:15:53 UTC, ends at Thu 2023-12-07 21:36:16 UTC. --
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.829928752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984975829903181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=2a971ca0-dcff-43f1-9880-e4c801e1690e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.830828133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=292fa55c-3ca3-41f0-b45d-94eada4bb3bb name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.830926463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=292fa55c-3ca3-41f0-b45d-94eada4bb3bb name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.831180259Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a94bd233c53753083d49569b9f67d5bcca6dcbd661e3423a60f8f1e25313558,PodSandboxId:954f69cb07067d93d138b8d3b21f6e74683655fc2356636293aab3e5feb2c4ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701984100548464744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9400eb14-80e0-4725-906e-b80cd7e998a1,},Annotations:map[string]string{io.kubernetes.container.hash: 71f51c6e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b82c33266c8cd496db092deef6e9921b53aadba47626e760e1294ea1409e54,PodSandboxId:336e55d5fcc5980970adea2e49bcb938aad4643558b4687c2a42eb63264aaebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701984100412535301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6v8td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268d28d1-60a9-4323-b36f-883388fbdcea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf23620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a649b8603569d15e90bcce4de2616fea81d0af3d462a8f26bd21824e8047a1,PodSandboxId:2b4ad458538851e7d650642af6496119ba7b16dc8224cd0760809b17ee15f65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701984099422506625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-cz2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5757c023-02cd-4be8-b4cc-6b45154f7b5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe25e3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b11510039f6adcf3de1bc80032f50d351bac5b29588bda709d3c301dad0668e,PodSandboxId:e541022f9d01c7c30c00b31c6e70476a08a4cd2c6a733f96ddbd9b75cb67b4d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701984076694580067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96e722d20ddbab6255f365e76f46cc68,},Annotations:map[string]string{io.kubernetes.container.hash: 55c76d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3d15a27f8f9fa5de9244c9871c1731bcf83ab27c491a7ab7c7e88e17702f72,PodSandboxId:68aa3031878817a959ffbcf229875292ee66252e148574554751cce4e912e5ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701984076515910746,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c438697617426137ace4267c786049d,},Annotations:map
[string]string{io.kubernetes.container.hash: 703d180b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13af9806c4e5091a10d6775e7166368534650fbeacb2005e0a0355d27b1970d9,PodSandboxId:7524486cd2b1302f63c513126940587fe29ae1868b1f42066ea842c02cf4944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701984076132575148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57367836cee7f9cd3e80bdbd52661bc3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cde6958dd3c4c4f1bc5b359ca4cff102e9fd270d658608e572688c04b4b231f,PodSandboxId:e08ecb9106195236828079e12569898f281c25eecf449e99336fbeab0af9e97b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701984076283574017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff3caf8698d5a46a55e9ed3203d0a59,},An
notations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=292fa55c-3ca3-41f0-b45d-94eada4bb3bb name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.881123440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=585d3505-f4d1-49f6-80aa-6287b90a72f5 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.881205477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=585d3505-f4d1-49f6-80aa-6287b90a72f5 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.883518431Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=90f5c402-55bd-4d50-978d-ee518914270c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.884052050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984975884031766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=90f5c402-55bd-4d50-978d-ee518914270c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.884901431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c99b39b5-fac7-4983-b9b3-249bccce26a0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.884974176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c99b39b5-fac7-4983-b9b3-249bccce26a0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.885203617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a94bd233c53753083d49569b9f67d5bcca6dcbd661e3423a60f8f1e25313558,PodSandboxId:954f69cb07067d93d138b8d3b21f6e74683655fc2356636293aab3e5feb2c4ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701984100548464744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9400eb14-80e0-4725-906e-b80cd7e998a1,},Annotations:map[string]string{io.kubernetes.container.hash: 71f51c6e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b82c33266c8cd496db092deef6e9921b53aadba47626e760e1294ea1409e54,PodSandboxId:336e55d5fcc5980970adea2e49bcb938aad4643558b4687c2a42eb63264aaebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701984100412535301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6v8td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268d28d1-60a9-4323-b36f-883388fbdcea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf23620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a649b8603569d15e90bcce4de2616fea81d0af3d462a8f26bd21824e8047a1,PodSandboxId:2b4ad458538851e7d650642af6496119ba7b16dc8224cd0760809b17ee15f65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701984099422506625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-cz2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5757c023-02cd-4be8-b4cc-6b45154f7b5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe25e3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b11510039f6adcf3de1bc80032f50d351bac5b29588bda709d3c301dad0668e,PodSandboxId:e541022f9d01c7c30c00b31c6e70476a08a4cd2c6a733f96ddbd9b75cb67b4d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701984076694580067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96e722d20ddbab6255f365e76f46cc68,},Annotations:map[string]string{io.kubernetes.container.hash: 55c76d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3d15a27f8f9fa5de9244c9871c1731bcf83ab27c491a7ab7c7e88e17702f72,PodSandboxId:68aa3031878817a959ffbcf229875292ee66252e148574554751cce4e912e5ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701984076515910746,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c438697617426137ace4267c786049d,},Annotations:map
[string]string{io.kubernetes.container.hash: 703d180b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13af9806c4e5091a10d6775e7166368534650fbeacb2005e0a0355d27b1970d9,PodSandboxId:7524486cd2b1302f63c513126940587fe29ae1868b1f42066ea842c02cf4944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701984076132575148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57367836cee7f9cd3e80bdbd52661bc3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cde6958dd3c4c4f1bc5b359ca4cff102e9fd270d658608e572688c04b4b231f,PodSandboxId:e08ecb9106195236828079e12569898f281c25eecf449e99336fbeab0af9e97b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701984076283574017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff3caf8698d5a46a55e9ed3203d0a59,},An
notations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c99b39b5-fac7-4983-b9b3-249bccce26a0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.928413751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=afdbb7b5-e681-4e76-a02b-b72a945cfa86 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.928481193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=afdbb7b5-e681-4e76-a02b-b72a945cfa86 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.930479487Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d8ca0475-289a-4125-8b47-68e99f6186f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.930875815Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984975930861381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d8ca0475-289a-4125-8b47-68e99f6186f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.931862945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0267be62-cde8-43e4-a37c-1c97281fa278 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.931907167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0267be62-cde8-43e4-a37c-1c97281fa278 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.932910809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a94bd233c53753083d49569b9f67d5bcca6dcbd661e3423a60f8f1e25313558,PodSandboxId:954f69cb07067d93d138b8d3b21f6e74683655fc2356636293aab3e5feb2c4ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701984100548464744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9400eb14-80e0-4725-906e-b80cd7e998a1,},Annotations:map[string]string{io.kubernetes.container.hash: 71f51c6e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b82c33266c8cd496db092deef6e9921b53aadba47626e760e1294ea1409e54,PodSandboxId:336e55d5fcc5980970adea2e49bcb938aad4643558b4687c2a42eb63264aaebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701984100412535301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6v8td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268d28d1-60a9-4323-b36f-883388fbdcea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf23620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a649b8603569d15e90bcce4de2616fea81d0af3d462a8f26bd21824e8047a1,PodSandboxId:2b4ad458538851e7d650642af6496119ba7b16dc8224cd0760809b17ee15f65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701984099422506625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-cz2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5757c023-02cd-4be8-b4cc-6b45154f7b5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe25e3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b11510039f6adcf3de1bc80032f50d351bac5b29588bda709d3c301dad0668e,PodSandboxId:e541022f9d01c7c30c00b31c6e70476a08a4cd2c6a733f96ddbd9b75cb67b4d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701984076694580067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96e722d20ddbab6255f365e76f46cc68,},Annotations:map[string]string{io.kubernetes.container.hash: 55c76d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3d15a27f8f9fa5de9244c9871c1731bcf83ab27c491a7ab7c7e88e17702f72,PodSandboxId:68aa3031878817a959ffbcf229875292ee66252e148574554751cce4e912e5ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701984076515910746,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c438697617426137ace4267c786049d,},Annotations:map
[string]string{io.kubernetes.container.hash: 703d180b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13af9806c4e5091a10d6775e7166368534650fbeacb2005e0a0355d27b1970d9,PodSandboxId:7524486cd2b1302f63c513126940587fe29ae1868b1f42066ea842c02cf4944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701984076132575148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57367836cee7f9cd3e80bdbd52661bc3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cde6958dd3c4c4f1bc5b359ca4cff102e9fd270d658608e572688c04b4b231f,PodSandboxId:e08ecb9106195236828079e12569898f281c25eecf449e99336fbeab0af9e97b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701984076283574017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff3caf8698d5a46a55e9ed3203d0a59,},An
notations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0267be62-cde8-43e4-a37c-1c97281fa278 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.988200774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f67e275c-1c3f-4368-8169-8d467d2c04a3 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.988388824Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f67e275c-1c3f-4368-8169-8d467d2c04a3 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.990939496Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1fd2467a-e642-4fa2-8027-e565a3a4d4dc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.991517135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984975991496349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=1fd2467a-e642-4fa2-8027-e565a3a4d4dc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.992536279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b0d26ae2-b659-4104-a661-41eddd70fae1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.992635232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b0d26ae2-b659-4104-a661-41eddd70fae1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:36:15 no-preload-950431 crio[713]: time="2023-12-07 21:36:15.992942227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a94bd233c53753083d49569b9f67d5bcca6dcbd661e3423a60f8f1e25313558,PodSandboxId:954f69cb07067d93d138b8d3b21f6e74683655fc2356636293aab3e5feb2c4ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701984100548464744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9400eb14-80e0-4725-906e-b80cd7e998a1,},Annotations:map[string]string{io.kubernetes.container.hash: 71f51c6e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b82c33266c8cd496db092deef6e9921b53aadba47626e760e1294ea1409e54,PodSandboxId:336e55d5fcc5980970adea2e49bcb938aad4643558b4687c2a42eb63264aaebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701984100412535301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6v8td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268d28d1-60a9-4323-b36f-883388fbdcea,},Annotations:map[string]string{io.kubernetes.container.hash: cdf23620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a649b8603569d15e90bcce4de2616fea81d0af3d462a8f26bd21824e8047a1,PodSandboxId:2b4ad458538851e7d650642af6496119ba7b16dc8224cd0760809b17ee15f65f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701984099422506625,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-cz2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5757c023-02cd-4be8-b4cc-6b45154f7b5a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bfe25e3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b11510039f6adcf3de1bc80032f50d351bac5b29588bda709d3c301dad0668e,PodSandboxId:e541022f9d01c7c30c00b31c6e70476a08a4cd2c6a733f96ddbd9b75cb67b4d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701984076694580067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96e722d20ddbab6255f365e76f46cc68,},Annotations:map[string]string{io.kubernetes.container.hash: 55c76d36,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3d15a27f8f9fa5de9244c9871c1731bcf83ab27c491a7ab7c7e88e17702f72,PodSandboxId:68aa3031878817a959ffbcf229875292ee66252e148574554751cce4e912e5ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701984076515910746,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c438697617426137ace4267c786049d,},Annotations:map
[string]string{io.kubernetes.container.hash: 703d180b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13af9806c4e5091a10d6775e7166368534650fbeacb2005e0a0355d27b1970d9,PodSandboxId:7524486cd2b1302f63c513126940587fe29ae1868b1f42066ea842c02cf4944c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701984076132575148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57367836cee7f9cd3e80bdbd52661bc3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cde6958dd3c4c4f1bc5b359ca4cff102e9fd270d658608e572688c04b4b231f,PodSandboxId:e08ecb9106195236828079e12569898f281c25eecf449e99336fbeab0af9e97b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701984076283574017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-950431,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff3caf8698d5a46a55e9ed3203d0a59,},An
notations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b0d26ae2-b659-4104-a661-41eddd70fae1 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a94bd233c537       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   954f69cb07067       storage-provisioner
	c7b82c33266c8       86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff   14 minutes ago      Running             kube-proxy                0                   336e55d5fcc59       kube-proxy-6v8td
	54a649b860356       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   2b4ad45853885       coredns-76f75df574-cz2xd
	7b11510039f6a       5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956   14 minutes ago      Running             kube-apiserver            2                   e541022f9d01c       kube-apiserver-no-preload-950431
	aa3d15a27f8f9       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   68aa303187881       etcd-no-preload-950431
	5cde6958dd3c4       b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09   14 minutes ago      Running             kube-controller-manager   2                   e08ecb9106195       kube-controller-manager-no-preload-950431
	13af9806c4e50       b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542   14 minutes ago      Running             kube-scheduler            2                   7524486cd2b13       kube-scheduler-no-preload-950431
	
	* 
	* ==> coredns [54a649b8603569d15e90bcce4de2616fea81d0af3d462a8f26bd21824e8047a1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:50046 - 42970 "HINFO IN 9151674908356452295.5213838015451573474. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0223671s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-950431
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-950431
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=no-preload-950431
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T21_21_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 21:21:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-950431
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 21:36:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 21:31:55 +0000   Thu, 07 Dec 2023 21:21:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 21:31:55 +0000   Thu, 07 Dec 2023 21:21:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 21:31:55 +0000   Thu, 07 Dec 2023 21:21:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 21:31:55 +0000   Thu, 07 Dec 2023 21:21:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.100
	  Hostname:    no-preload-950431
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fc7293a6643464ba6a5d7a0a1cbcb0b
	  System UUID:                8fc7293a-6643-464b-a6a5-d7a0a1cbcb0b
	  Boot ID:                    affc1820-b0ed-4b55-b3dd-646f094aba6b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.1
	  Kube-Proxy Version:         v1.29.0-rc.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-cz2xd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-950431                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-950431             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-950431    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6v8td                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-950431             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-ffkls              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node no-preload-950431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node no-preload-950431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-950431 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-950431 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-950431 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-950431 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m                kubelet          Node no-preload-950431 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeReady                14m                kubelet          Node no-preload-950431 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-950431 event: Registered Node no-preload-950431 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 7 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070110] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.588205] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.528901] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150009] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.465046] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 7 21:16] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.115985] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.182337] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.137630] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.262103] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +30.083739] systemd-fstab-generator[1328]: Ignoring "noauto" for root device
	[ +22.406614] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 7 21:21] systemd-fstab-generator[3935]: Ignoring "noauto" for root device
	[  +9.323110] systemd-fstab-generator[4256]: Ignoring "noauto" for root device
	[ +13.305834] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.349782] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [aa3d15a27f8f9fa5de9244c9871c1731bcf83ab27c491a7ab7c7e88e17702f72] <==
	* {"level":"info","ts":"2023-12-07T21:21:19.261339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a93cffd6fd293f3 received MsgVoteResp from 8a93cffd6fd293f3 at term 2"}
	{"level":"info","ts":"2023-12-07T21:21:19.261347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a93cffd6fd293f3 became leader at term 2"}
	{"level":"info","ts":"2023-12-07T21:21:19.261356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8a93cffd6fd293f3 elected leader 8a93cffd6fd293f3 at term 2"}
	{"level":"info","ts":"2023-12-07T21:21:19.262852Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8a93cffd6fd293f3","local-member-attributes":"{Name:no-preload-950431 ClientURLs:[https://192.168.50.100:2379]}","request-path":"/0/members/8a93cffd6fd293f3/attributes","cluster-id":"6ddf9aff62617c59","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T21:21:19.263038Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:21:19.263494Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:21:19.263654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T21:21:19.264044Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T21:21:19.264086Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T21:21:19.266028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.100:2379"}
	{"level":"info","ts":"2023-12-07T21:21:19.266158Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6ddf9aff62617c59","local-member-id":"8a93cffd6fd293f3","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:21:19.266303Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:21:19.266358Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T21:21:19.268553Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T21:31:19.317091Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2023-12-07T21:31:19.319709Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":724,"took":"2.102484ms","hash":864524324}
	{"level":"info","ts":"2023-12-07T21:31:19.319746Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":864524324,"revision":724,"compact-revision":-1}
	{"level":"info","ts":"2023-12-07T21:35:50.120476Z","caller":"traceutil/trace.go:171","msg":"trace[216024348] linearizableReadLoop","detail":"{readStateIndex:1374; appliedIndex:1373; }","duration":"203.639931ms","start":"2023-12-07T21:35:49.91679Z","end":"2023-12-07T21:35:50.12043Z","steps":["trace[216024348] 'read index received'  (duration: 203.386721ms)","trace[216024348] 'applied index is now lower than readState.Index'  (duration: 252.553µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-07T21:35:50.120715Z","caller":"traceutil/trace.go:171","msg":"trace[535101450] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"322.197656ms","start":"2023-12-07T21:35:49.798502Z","end":"2023-12-07T21:35:50.1207Z","steps":["trace[535101450] 'process raft request'  (duration: 321.796467ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:35:50.121901Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:35:49.798484Z","time spent":"322.529037ms","remote":"127.0.0.1:38340","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-gg7ny564dmejvjonxyfxgcdqze\" mod_revision:1178 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-gg7ny564dmejvjonxyfxgcdqze\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-gg7ny564dmejvjonxyfxgcdqze\" > >"}
	{"level":"warn","ts":"2023-12-07T21:35:50.120898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.019344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2023-12-07T21:35:50.123121Z","caller":"traceutil/trace.go:171","msg":"trace[643288321] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1186; }","duration":"206.346532ms","start":"2023-12-07T21:35:49.916753Z","end":"2023-12-07T21:35:50.123099Z","steps":["trace[643288321] 'agreement among raft nodes before linearized reading'  (duration: 203.994253ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T21:35:50.476043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.501255ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10661018975863425178 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1185 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-07T21:35:50.476171Z","caller":"traceutil/trace.go:171","msg":"trace[1292194466] transaction","detail":"{read_only:false; response_revision:1187; number_of_response:1; }","duration":"348.472164ms","start":"2023-12-07T21:35:50.127687Z","end":"2023-12-07T21:35:50.47616Z","steps":["trace[1292194466] 'process raft request'  (duration: 89.630907ms)","trace[1292194466] 'compare'  (duration: 258.349631ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T21:35:50.476325Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-07T21:35:50.127672Z","time spent":"348.618039ms","remote":"127.0.0.1:38318","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1185 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	* 
	* ==> kernel <==
	*  21:36:16 up 20 min,  0 users,  load average: 0.27, 0.19, 0.21
	Linux no-preload-950431 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7b11510039f6adcf3de1bc80032f50d351bac5b29588bda709d3c301dad0668e] <==
	* I1207 21:29:21.791566       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:31:20.792828       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:31:20.792990       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1207 21:31:21.793537       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:31:21.793693       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:31:21.793726       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:31:21.793964       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:31:21.794102       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:31:21.795278       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:32:21.794130       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:32:21.794419       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:32:21.794500       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:32:21.795745       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:32:21.795807       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:32:21.795816       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:34:21.795496       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:34:21.795725       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:34:21.795739       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 21:34:21.796995       1 handler_proxy.go:93] no RequestInfo found in the context
	E1207 21:34:21.797060       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1207 21:34:21.797077       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [5cde6958dd3c4c4f1bc5b359ca4cff102e9fd270d658608e572688c04b4b231f] <==
	* I1207 21:30:36.528011       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:31:06.048682       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:31:06.539169       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:31:36.055192       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:31:36.548460       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:32:06.061723       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:32:06.557525       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:32:36.067859       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:32:36.566671       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1207 21:32:43.308006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="92.757µs"
	I1207 21:32:56.310599       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="356.835µs"
	E1207 21:33:06.075538       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:33:06.576379       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:33:36.080770       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:33:36.585855       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:34:06.086156       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:34:06.594129       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:34:36.091988       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:34:36.604430       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:35:06.099994       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:35:06.614838       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:35:36.106044       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:35:36.634427       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1207 21:36:06.114303       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1207 21:36:06.645533       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [c7b82c33266c8cd496db092deef6e9921b53aadba47626e760e1294ea1409e54] <==
	* I1207 21:21:40.859296       1 server_others.go:72] "Using iptables proxy"
	I1207 21:21:40.877497       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.100"]
	I1207 21:21:40.932527       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1207 21:21:40.932606       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 21:21:40.932626       1 server_others.go:168] "Using iptables Proxier"
	I1207 21:21:40.936719       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 21:21:40.936910       1 server.go:865] "Version info" version="v1.29.0-rc.1"
	I1207 21:21:40.936953       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 21:21:40.939612       1 config.go:188] "Starting service config controller"
	I1207 21:21:40.939680       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 21:21:40.939718       1 config.go:97] "Starting endpoint slice config controller"
	I1207 21:21:40.939735       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 21:21:40.944712       1 config.go:315] "Starting node config controller"
	I1207 21:21:40.944785       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 21:21:41.040652       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 21:21:41.040715       1 shared_informer.go:318] Caches are synced for service config
	I1207 21:21:41.044872       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [13af9806c4e5091a10d6775e7166368534650fbeacb2005e0a0355d27b1970d9] <==
	* W1207 21:21:20.803505       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:20.803683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 21:21:20.805092       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:20.806786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 21:21:21.606481       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 21:21:21.606545       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 21:21:21.686916       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:21.687044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 21:21:21.761903       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 21:21:21.762448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1207 21:21:21.796131       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 21:21:21.796396       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1207 21:21:21.798748       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:21:21.798808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 21:21:21.961495       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:21.961552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1207 21:21:22.033518       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 21:21:22.033626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1207 21:21:22.053600       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 21:21:22.053660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1207 21:21:22.063432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:22.063541       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1207 21:21:22.101425       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 21:21:22.101551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1207 21:21:23.996140       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 21:15:53 UTC, ends at Thu 2023-12-07 21:36:16 UTC. --
	Dec 07 21:33:24 no-preload-950431 kubelet[4263]: E1207 21:33:24.311158    4263 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:33:24 no-preload-950431 kubelet[4263]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:33:24 no-preload-950431 kubelet[4263]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:33:24 no-preload-950431 kubelet[4263]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:33:34 no-preload-950431 kubelet[4263]: E1207 21:33:34.286925    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:33:49 no-preload-950431 kubelet[4263]: E1207 21:33:49.286788    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:34:00 no-preload-950431 kubelet[4263]: E1207 21:34:00.287605    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:34:11 no-preload-950431 kubelet[4263]: E1207 21:34:11.286905    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:34:24 no-preload-950431 kubelet[4263]: E1207 21:34:24.309777    4263 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:34:24 no-preload-950431 kubelet[4263]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:34:24 no-preload-950431 kubelet[4263]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:34:24 no-preload-950431 kubelet[4263]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:34:25 no-preload-950431 kubelet[4263]: E1207 21:34:25.285753    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:34:39 no-preload-950431 kubelet[4263]: E1207 21:34:39.286146    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:34:51 no-preload-950431 kubelet[4263]: E1207 21:34:51.287178    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:35:06 no-preload-950431 kubelet[4263]: E1207 21:35:06.287720    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:35:17 no-preload-950431 kubelet[4263]: E1207 21:35:17.289821    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:35:24 no-preload-950431 kubelet[4263]: E1207 21:35:24.311839    4263 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 07 21:35:24 no-preload-950431 kubelet[4263]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 07 21:35:24 no-preload-950431 kubelet[4263]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 07 21:35:24 no-preload-950431 kubelet[4263]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 07 21:35:31 no-preload-950431 kubelet[4263]: E1207 21:35:31.286090    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:35:46 no-preload-950431 kubelet[4263]: E1207 21:35:46.287215    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:35:59 no-preload-950431 kubelet[4263]: E1207 21:35:59.288856    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	Dec 07 21:36:12 no-preload-950431 kubelet[4263]: E1207 21:36:12.287711    4263 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ffkls" podUID="e571e115-9e30-4be3-b77c-27db27a95feb"
	
	* 
	* ==> storage-provisioner [9a94bd233c53753083d49569b9f67d5bcca6dcbd661e3423a60f8f1e25313558] <==
	* I1207 21:21:40.784738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 21:21:40.804638       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 21:21:40.804750       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 21:21:40.814504       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 21:21:40.814715       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-950431_aa3775fe-d082-4576-986b-c84b350e0039!
	I1207 21:21:40.815867       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbb7d48e-5ba0-415f-b255-9b1b2a4e906e", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-950431_aa3775fe-d082-4576-986b-c84b350e0039 became leader
	I1207 21:21:40.915412       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-950431_aa3775fe-d082-4576-986b-c84b350e0039!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-950431 -n no-preload-950431
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-950431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ffkls
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-950431 describe pod metrics-server-57f55c9bc5-ffkls
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-950431 describe pod metrics-server-57f55c9bc5-ffkls: exit status 1 (71.527699ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ffkls" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-950431 describe pod metrics-server-57f55c9bc5-ffkls: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (330.72s)
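
The ImagePullBackOff loop in the kubelet log above is expected for this test: metrics-server was enabled with its registry overridden to fake.domain (see the addons enable metrics-server entry with --registries=MetricsServer=fake.domain in the Audit table further down), so the image can never be pulled and the pod never reaches Running. A minimal sketch of how one might confirm the injected image reference, assuming kubectl still has access to the no-preload-950431 context:

	# Image reference the addon wired into the metrics-server deployment
	kubectl --context no-preload-950431 -n kube-system \
	  get deployment metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# Per the kubelet log, this should print fake.domain/registry.k8s.io/echoserver:1.4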

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (126.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1207 21:34:28.941249   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-483745 -n old-k8s-version-483745
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-07 21:35:13.977539241 +0000 UTC m=+5638.145684206
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-483745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-483745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.779µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-483745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
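
The assertion behind this failure is that the dashboard-metrics-scraper deployment should carry the overridden registry.k8s.io/echoserver:1.4 image; the describe call above hit the context deadline before it could show that. A minimal sketch of the same check, assuming the old-k8s-version-483745 context is reachable:

	kubectl --context old-k8s-version-483745 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the test expects this output to contain registry.k8s.io/echoserver:1.4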
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-483745 -n old-k8s-version-483745
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-483745 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-483745 logs -n 25: (1.613271068s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-620116 -- sudo                         | cert-options-620116          | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-620116                                 | cert-options-620116          | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:06 UTC |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:10 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:06 UTC | 07 Dec 23 21:08 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-099448                              | stopped-upgrade-099448       | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:07 UTC |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:07 UTC | 07 Dec 23 21:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-483745        | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-763966                                        | pause-763966                 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-121798 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:08 UTC |
	|         | disable-driver-mounts-121798                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:08 UTC | 07 Dec 23 21:10 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-598346            | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC | 07 Dec 23 21:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-950431             | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-275828  | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-483745             | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-483745                              | old-k8s-version-483745       | jenkins | v1.32.0 | 07 Dec 23 21:10 UTC | 07 Dec 23 21:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-598346                 | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-598346                                  | embed-certs-598346           | jenkins | v1.32.0 | 07 Dec 23 21:11 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-950431                  | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-275828       | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-950431                                   | no-preload-950431            | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-275828 | jenkins | v1.32.0 | 07 Dec 23 21:12 UTC | 07 Dec 23 21:21 UTC |
	|         | default-k8s-diff-port-275828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 21:12:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 21:12:54.827966   51113 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:12:54.828121   51113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:12:54.828131   51113 out.go:309] Setting ErrFile to fd 2...
	I1207 21:12:54.828138   51113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:12:54.828309   51113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:12:54.828894   51113 out.go:303] Setting JSON to false
	I1207 21:12:54.829778   51113 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6921,"bootTime":1701976654,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:12:54.829872   51113 start.go:138] virtualization: kvm guest
	I1207 21:12:54.832359   51113 out.go:177] * [default-k8s-diff-port-275828] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:12:54.833958   51113 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:12:54.833997   51113 notify.go:220] Checking for updates...
	I1207 21:12:54.835484   51113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:12:54.837345   51113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:12:54.838716   51113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:12:54.840105   51113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:12:54.841497   51113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:12:54.843170   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:12:54.843587   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:12:54.843638   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:12:54.857987   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I1207 21:12:54.858345   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:12:54.858826   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:12:54.858846   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:12:54.859141   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:12:54.859317   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:12:54.859528   51113 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:12:54.859797   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:12:54.859827   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:12:54.873523   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1207 21:12:54.873866   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:12:54.874374   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:12:54.874399   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:12:54.874726   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:12:54.874907   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:12:54.906909   51113 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 21:12:54.908496   51113 start.go:298] selected driver: kvm2
	I1207 21:12:54.908515   51113 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:12:54.908626   51113 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:12:54.909287   51113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:54.909431   51113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 21:12:54.924711   51113 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 21:12:54.925077   51113 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 21:12:54.925136   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:12:54.925149   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:12:54.925174   51113 start_flags.go:323] config:
	{Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:12:54.925311   51113 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:54.927216   51113 out.go:177] * Starting control plane node default-k8s-diff-port-275828 in cluster default-k8s-diff-port-275828
	I1207 21:12:51.859250   51037 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:12:51.859366   51037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:12:51.859440   51037 cache.go:107] acquiring lock: {Name:mke7b9cce1dd6177935767b47cf17b792acd813b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859507   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 21:12:51.859492   51037 cache.go:107] acquiring lock: {Name:mk57eae37995939df6ffd0df03832314e9e6100e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859493   51037 cache.go:107] acquiring lock: {Name:mk5a91936dc04372c96de7514149d2b4b0d17dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859522   51037 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.402µs
	I1207 21:12:51.859538   51037 cache.go:107] acquiring lock: {Name:mk4c716c1104ca016c5e335d1cbf204f19d0197f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859560   51037 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 21:12:51.859581   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 exists
	I1207 21:12:51.859591   51037 start.go:365] acquiring machines lock for no-preload-950431: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:12:51.859593   51037 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1" took 111.482µs
	I1207 21:12:51.859611   51037 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859596   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 exists
	I1207 21:12:51.859564   51037 cache.go:107] acquiring lock: {Name:mke02250ffd1d3b6fb4470dd05093397053b289d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859627   51037 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1" took 139.857µs
	I1207 21:12:51.859637   51037 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859588   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I1207 21:12:51.859647   51037 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 112.196µs
	I1207 21:12:51.859621   51037 cache.go:107] acquiring lock: {Name:mk2a1c8afaf74efaf0daac8bf102ee63aa4b5154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859664   51037 cache.go:107] acquiring lock: {Name:mk042626599761dccdc47fcf8ee95d59d24917b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859660   51037 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I1207 21:12:51.859443   51037 cache.go:107] acquiring lock: {Name:mk69e12850117516cff168d811605a739d29808c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 21:12:51.859701   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I1207 21:12:51.859715   51037 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 185.872µs
	I1207 21:12:51.859736   51037 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I1207 21:12:51.859728   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 exists
	I1207 21:12:51.859750   51037 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1" took 313.668µs
	I1207 21:12:51.859758   51037 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859796   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 exists
	I1207 21:12:51.859809   51037 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1" took 179.42µs
	I1207 21:12:51.859823   51037 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 succeeded
	I1207 21:12:51.859808   51037 cache.go:115] /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I1207 21:12:51.859910   51037 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 310.345µs
	I1207 21:12:51.859931   51037 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I1207 21:12:51.859947   51037 cache.go:87] Successfully saved all images to host disk.
	I1207 21:12:57.714205   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:12:54.928473   51113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:12:54.928503   51113 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 21:12:54.928516   51113 cache.go:56] Caching tarball of preloaded images
	I1207 21:12:54.928608   51113 preload.go:174] Found /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 21:12:54.928621   51113 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1207 21:12:54.928718   51113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:12:54.928893   51113 start.go:365] acquiring machines lock for default-k8s-diff-port-275828: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:13:00.786234   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:06.866234   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:09.938211   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:16.018206   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:19.090196   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:25.170164   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:28.242299   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:34.322194   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:37.394241   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:43.474183   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:46.546186   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:52.626214   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:13:55.698176   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:01.778218   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:04.850228   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:10.930239   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:14.002222   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:20.082270   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:23.154237   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:29.234226   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:32.306242   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:38.386218   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:41.458157   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:47.538219   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:50.610223   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:56.690260   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:14:59.766215   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:05.842220   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:08.914154   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:14.994193   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:18.066232   50270 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.171:22: connect: no route to host
	I1207 21:15:21.070365   50624 start.go:369] acquired machines lock for "embed-certs-598346" in 3m44.734224905s
	I1207 21:15:21.070421   50624 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:15:21.070427   50624 fix.go:54] fixHost starting: 
	I1207 21:15:21.070755   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:15:21.070787   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:15:21.085298   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I1207 21:15:21.085643   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:15:21.086150   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:15:21.086172   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:15:21.086491   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:15:21.086681   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:21.086828   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:15:21.088256   50624 fix.go:102] recreateIfNeeded on embed-certs-598346: state=Stopped err=<nil>
	I1207 21:15:21.088283   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	W1207 21:15:21.088465   50624 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:15:21.090020   50624 out.go:177] * Restarting existing kvm2 VM for "embed-certs-598346" ...
	I1207 21:15:21.091364   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Start
	I1207 21:15:21.091521   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring networks are active...
	I1207 21:15:21.092215   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring network default is active
	I1207 21:15:21.092551   50624 main.go:141] libmachine: (embed-certs-598346) Ensuring network mk-embed-certs-598346 is active
	I1207 21:15:21.092938   50624 main.go:141] libmachine: (embed-certs-598346) Getting domain xml...
	I1207 21:15:21.093647   50624 main.go:141] libmachine: (embed-certs-598346) Creating domain...
	I1207 21:15:21.067977   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:15:21.068024   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:15:21.070214   50270 machine.go:91] provisioned docker machine in 4m37.409386757s
	I1207 21:15:21.070272   50270 fix.go:56] fixHost completed within 4m37.430493841s
	I1207 21:15:21.070280   50270 start.go:83] releasing machines lock for "old-k8s-version-483745", held for 4m37.43051315s
	W1207 21:15:21.070299   50270 start.go:694] error starting host: provision: host is not running
	W1207 21:15:21.070399   50270 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1207 21:15:21.070408   50270 start.go:709] Will try again in 5 seconds ...
	I1207 21:15:22.319220   50624 main.go:141] libmachine: (embed-certs-598346) Waiting to get IP...
	I1207 21:15:22.320059   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.320432   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.320505   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.320416   51516 retry.go:31] will retry after 306.732639ms: waiting for machine to come up
	I1207 21:15:22.629026   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.629495   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.629523   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.629465   51516 retry.go:31] will retry after 244.665765ms: waiting for machine to come up
	I1207 21:15:22.875896   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:22.876248   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:22.876275   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:22.876210   51516 retry.go:31] will retry after 389.522298ms: waiting for machine to come up
	I1207 21:15:23.267728   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:23.268119   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:23.268140   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:23.268064   51516 retry.go:31] will retry after 521.34699ms: waiting for machine to come up
	I1207 21:15:23.790614   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:23.791043   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:23.791067   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:23.791002   51516 retry.go:31] will retry after 493.71234ms: waiting for machine to come up
	I1207 21:15:24.286698   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:24.287121   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:24.287145   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:24.287061   51516 retry.go:31] will retry after 736.984501ms: waiting for machine to come up
	I1207 21:15:25.025941   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:25.026294   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:25.026317   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:25.026256   51516 retry.go:31] will retry after 1.06643424s: waiting for machine to come up
	I1207 21:15:26.093760   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:26.094266   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:26.094306   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:26.094211   51516 retry.go:31] will retry after 1.226791228s: waiting for machine to come up
	I1207 21:15:26.072827   50270 start.go:365] acquiring machines lock for old-k8s-version-483745: {Name:mk16d25df85665897e0e0d3d8bc309da40cbcf97 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 21:15:27.322536   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:27.322912   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:27.322940   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:27.322857   51516 retry.go:31] will retry after 1.246504696s: waiting for machine to come up
	I1207 21:15:28.571241   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:28.571651   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:28.571677   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:28.571606   51516 retry.go:31] will retry after 2.084958391s: waiting for machine to come up
	I1207 21:15:30.658654   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:30.659047   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:30.659080   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:30.658990   51516 retry.go:31] will retry after 2.104944011s: waiting for machine to come up
	I1207 21:15:32.765669   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:32.766136   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:32.766167   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:32.766076   51516 retry.go:31] will retry after 3.05038185s: waiting for machine to come up
	I1207 21:15:35.819082   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:35.819446   50624 main.go:141] libmachine: (embed-certs-598346) DBG | unable to find current IP address of domain embed-certs-598346 in network mk-embed-certs-598346
	I1207 21:15:35.819477   50624 main.go:141] libmachine: (embed-certs-598346) DBG | I1207 21:15:35.819399   51516 retry.go:31] will retry after 3.445969037s: waiting for machine to come up
	I1207 21:15:40.686593   51037 start.go:369] acquired machines lock for "no-preload-950431" in 2m48.82697748s
	I1207 21:15:40.686639   51037 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:15:40.686646   51037 fix.go:54] fixHost starting: 
	I1207 21:15:40.687011   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:15:40.687043   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:15:40.703294   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I1207 21:15:40.703682   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:15:40.704245   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:15:40.704276   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:15:40.704620   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:15:40.704792   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:15:40.704938   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:15:40.706394   51037 fix.go:102] recreateIfNeeded on no-preload-950431: state=Stopped err=<nil>
	I1207 21:15:40.706420   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	W1207 21:15:40.706593   51037 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:15:40.709148   51037 out.go:177] * Restarting existing kvm2 VM for "no-preload-950431" ...
	I1207 21:15:39.269367   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.269776   50624 main.go:141] libmachine: (embed-certs-598346) Found IP for machine: 192.168.72.180
	I1207 21:15:39.269802   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has current primary IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.269808   50624 main.go:141] libmachine: (embed-certs-598346) Reserving static IP address...
	I1207 21:15:39.270234   50624 main.go:141] libmachine: (embed-certs-598346) Reserved static IP address: 192.168.72.180
	I1207 21:15:39.270265   50624 main.go:141] libmachine: (embed-certs-598346) Waiting for SSH to be available...
	I1207 21:15:39.270279   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "embed-certs-598346", mac: "52:54:00:15:56:8f", ip: "192.168.72.180"} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.270308   50624 main.go:141] libmachine: (embed-certs-598346) DBG | skip adding static IP to network mk-embed-certs-598346 - found existing host DHCP lease matching {name: "embed-certs-598346", mac: "52:54:00:15:56:8f", ip: "192.168.72.180"}
	I1207 21:15:39.270325   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Getting to WaitForSSH function...
	I1207 21:15:39.272292   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.272639   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.272674   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.272773   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Using SSH client type: external
	I1207 21:15:39.272827   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa (-rw-------)
	I1207 21:15:39.272869   50624 main.go:141] libmachine: (embed-certs-598346) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:15:39.272887   50624 main.go:141] libmachine: (embed-certs-598346) DBG | About to run SSH command:
	I1207 21:15:39.272903   50624 main.go:141] libmachine: (embed-certs-598346) DBG | exit 0
	I1207 21:15:39.363326   50624 main.go:141] libmachine: (embed-certs-598346) DBG | SSH cmd err, output: <nil>: 
	I1207 21:15:39.363757   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetConfigRaw
	I1207 21:15:39.364301   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:39.366828   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.367157   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.367206   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.367459   50624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/config.json ...
	I1207 21:15:39.367693   50624 machine.go:88] provisioning docker machine ...
	I1207 21:15:39.367713   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:39.367918   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.368085   50624 buildroot.go:166] provisioning hostname "embed-certs-598346"
	I1207 21:15:39.368104   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.368241   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.370443   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.370771   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.370798   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.371044   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.371192   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.371358   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.371507   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.371660   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:39.372058   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:39.372078   50624 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-598346 && echo "embed-certs-598346" | sudo tee /etc/hostname
	I1207 21:15:39.498370   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-598346
	
	I1207 21:15:39.498394   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.501284   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.501691   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.501711   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.501952   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.502135   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.502267   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.502432   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.502604   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:39.503052   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:39.503091   50624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-598346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-598346/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-598346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:15:39.625683   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:15:39.625713   50624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:15:39.625735   50624 buildroot.go:174] setting up certificates
	I1207 21:15:39.625748   50624 provision.go:83] configureAuth start
	I1207 21:15:39.625760   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetMachineName
	I1207 21:15:39.626074   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:39.628753   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.629102   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.629125   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.629277   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.631206   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.631478   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.631507   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.631632   50624 provision.go:138] copyHostCerts
	I1207 21:15:39.631682   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:15:39.631698   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:15:39.631763   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:15:39.631844   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:15:39.631852   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:15:39.631874   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:15:39.631922   50624 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:15:39.631928   50624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:15:39.631951   50624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:15:39.631993   50624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.embed-certs-598346 san=[192.168.72.180 192.168.72.180 localhost 127.0.0.1 minikube embed-certs-598346]
	I1207 21:15:39.968036   50624 provision.go:172] copyRemoteCerts
	I1207 21:15:39.968098   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:15:39.968121   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:39.970937   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.971356   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:39.971386   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:39.971627   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:39.971847   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:39.972010   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:39.972148   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.060156   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:15:40.082673   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1207 21:15:40.104263   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:15:40.125974   50624 provision.go:86] duration metric: configureAuth took 500.211549ms
	I1207 21:15:40.126012   50624 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:15:40.126233   50624 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:15:40.126317   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.129108   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.129484   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.129505   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.129662   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.129884   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.130039   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.130197   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.130358   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:40.130677   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:40.130698   50624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:15:40.439407   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:15:40.439438   50624 machine.go:91] provisioned docker machine in 1.071729841s
	I1207 21:15:40.439451   50624 start.go:300] post-start starting for "embed-certs-598346" (driver="kvm2")
	I1207 21:15:40.439465   50624 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:15:40.439504   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.439827   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:15:40.439860   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.442750   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.443135   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.443160   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.443400   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.443623   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.443811   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.443974   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.531350   50624 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:15:40.535614   50624 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:15:40.535644   50624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:15:40.535720   50624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:15:40.535813   50624 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:15:40.535938   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:15:40.543981   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:15:40.566714   50624 start.go:303] post-start completed in 127.248268ms
	I1207 21:15:40.566739   50624 fix.go:56] fixHost completed within 19.496310567s
	I1207 21:15:40.566763   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.569439   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.569774   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.569791   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.569915   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.570085   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.570257   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.570386   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.570534   50624 main.go:141] libmachine: Using SSH client type: native
	I1207 21:15:40.570842   50624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1207 21:15:40.570855   50624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:15:40.686455   50624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983740.637211698
	
	I1207 21:15:40.686479   50624 fix.go:206] guest clock: 1701983740.637211698
	I1207 21:15:40.686486   50624 fix.go:219] Guest: 2023-12-07 21:15:40.637211698 +0000 UTC Remote: 2023-12-07 21:15:40.566742665 +0000 UTC m=+244.381466877 (delta=70.469033ms)
	I1207 21:15:40.686503   50624 fix.go:190] guest clock delta is within tolerance: 70.469033ms
	I1207 21:15:40.686508   50624 start.go:83] releasing machines lock for "embed-certs-598346", held for 19.61610992s
	I1207 21:15:40.686533   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.686809   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:40.689665   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.690046   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.690069   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.690242   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690685   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690903   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:15:40.690988   50624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:15:40.691035   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.691162   50624 ssh_runner.go:195] Run: cat /version.json
	I1207 21:15:40.691196   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:15:40.693712   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.693943   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694078   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.694106   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694269   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.694295   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:40.694333   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:40.694419   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.694501   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:15:40.694580   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.694685   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:15:40.694742   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.694816   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:15:40.694925   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:15:40.801618   50624 ssh_runner.go:195] Run: systemctl --version
	I1207 21:15:40.807496   50624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:15:40.967288   50624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:15:40.974223   50624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:15:40.974315   50624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:15:40.988391   50624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:15:40.988418   50624 start.go:475] detecting cgroup driver to use...
	I1207 21:15:40.988510   50624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:15:41.002379   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:15:41.016074   50624 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:15:41.016125   50624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:15:41.031096   50624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:15:41.044808   50624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:15:41.150630   50624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:15:40.710656   51037 main.go:141] libmachine: (no-preload-950431) Calling .Start
	I1207 21:15:40.710832   51037 main.go:141] libmachine: (no-preload-950431) Ensuring networks are active...
	I1207 21:15:40.711509   51037 main.go:141] libmachine: (no-preload-950431) Ensuring network default is active
	I1207 21:15:40.711813   51037 main.go:141] libmachine: (no-preload-950431) Ensuring network mk-no-preload-950431 is active
	I1207 21:15:40.712201   51037 main.go:141] libmachine: (no-preload-950431) Getting domain xml...
	I1207 21:15:40.712860   51037 main.go:141] libmachine: (no-preload-950431) Creating domain...
	I1207 21:15:41.269009   50624 docker.go:219] disabling docker service ...
	I1207 21:15:41.269067   50624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:15:41.281800   50624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:15:41.293694   50624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:15:41.413774   50624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:15:41.523960   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:15:41.536474   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:15:41.553611   50624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:15:41.553668   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.562741   50624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:15:41.562831   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.571841   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.580887   50624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:15:41.590259   50624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:15:41.599349   50624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:15:41.607259   50624 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:15:41.607314   50624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:15:41.619425   50624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:15:41.627826   50624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:15:41.736815   50624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:15:41.896418   50624 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:15:41.896505   50624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:15:41.901539   50624 start.go:543] Will wait 60s for crictl version
	I1207 21:15:41.901598   50624 ssh_runner.go:195] Run: which crictl
	I1207 21:15:41.905454   50624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:15:41.942196   50624 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:15:41.942267   50624 ssh_runner.go:195] Run: crio --version
	I1207 21:15:41.986024   50624 ssh_runner.go:195] Run: crio --version
	I1207 21:15:42.034806   50624 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:15:42.036352   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetIP
	I1207 21:15:42.039304   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:42.039704   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:15:42.039745   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:15:42.039930   50624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1207 21:15:42.043951   50624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:15:42.056473   50624 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:15:42.056535   50624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:15:42.099359   50624 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:15:42.099459   50624 ssh_runner.go:195] Run: which lz4
	I1207 21:15:42.103324   50624 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 21:15:42.107440   50624 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:15:42.107476   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:15:44.063941   50624 crio.go:444] Took 1.960653 seconds to copy over tarball
	I1207 21:15:44.064018   50624 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:15:41.955586   51037 main.go:141] libmachine: (no-preload-950431) Waiting to get IP...
	I1207 21:15:41.956530   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:41.956967   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:41.957004   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:41.956919   51634 retry.go:31] will retry after 266.143384ms: waiting for machine to come up
	I1207 21:15:42.224547   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.225112   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.225142   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.225060   51634 retry.go:31] will retry after 314.364486ms: waiting for machine to come up
	I1207 21:15:42.540722   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.541264   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.541294   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.541225   51634 retry.go:31] will retry after 447.845741ms: waiting for machine to come up
	I1207 21:15:42.990858   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:42.991283   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:42.991310   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:42.991246   51634 retry.go:31] will retry after 494.509595ms: waiting for machine to come up
	I1207 21:15:43.487745   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:43.488268   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:43.488305   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:43.488218   51634 retry.go:31] will retry after 517.471464ms: waiting for machine to come up
	I1207 21:15:44.007846   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:44.008291   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:44.008322   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:44.008247   51634 retry.go:31] will retry after 755.53339ms: waiting for machine to come up
	I1207 21:15:44.765367   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:44.765799   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:44.765827   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:44.765743   51634 retry.go:31] will retry after 947.674862ms: waiting for machine to come up
	I1207 21:15:45.715436   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:45.715859   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:45.715890   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:45.715811   51634 retry.go:31] will retry after 1.304063218s: waiting for machine to come up
	I1207 21:15:47.049597   50624 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.985550761s)
	I1207 21:15:47.049622   50624 crio.go:451] Took 2.985655 seconds to extract the tarball
	I1207 21:15:47.049632   50624 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:15:47.089358   50624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:15:47.145982   50624 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:15:47.146007   50624 cache_images.go:84] Images are preloaded, skipping loading
	I1207 21:15:47.146069   50624 ssh_runner.go:195] Run: crio config
	I1207 21:15:47.205864   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:15:47.205888   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:15:47.205904   50624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:15:47.205933   50624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-598346 NodeName:embed-certs-598346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:15:47.206106   50624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-598346"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:15:47.206189   50624 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-598346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:15:47.206249   50624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:15:47.214998   50624 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:15:47.215065   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:15:47.223252   50624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1207 21:15:47.239698   50624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:15:47.258476   50624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1207 21:15:47.275957   50624 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1207 21:15:47.279689   50624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:15:47.295204   50624 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346 for IP: 192.168.72.180
	I1207 21:15:47.295234   50624 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:15:47.295391   50624 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:15:47.295436   50624 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:15:47.295501   50624 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/client.key
	I1207 21:15:47.295552   50624 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.key.379caec1
	I1207 21:15:47.295589   50624 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.key
	I1207 21:15:47.295686   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:15:47.295712   50624 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:15:47.295722   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:15:47.295748   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:15:47.295772   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:15:47.295795   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:15:47.295835   50624 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:15:47.296438   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:15:47.324057   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:15:47.350921   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:15:47.378603   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/embed-certs-598346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:15:47.405443   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:15:47.429942   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:15:47.455437   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:15:47.478735   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:15:47.503326   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:15:47.525886   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:15:47.549414   50624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:15:47.572018   50624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:15:47.590990   50624 ssh_runner.go:195] Run: openssl version
	I1207 21:15:47.597874   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:15:47.610087   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.615875   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.615949   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:15:47.622941   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:15:47.632217   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:15:47.641323   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.645877   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.645955   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:15:47.651452   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:15:47.660848   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:15:47.670225   50624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.674620   50624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.674670   50624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:15:47.680118   50624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:15:47.689444   50624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:15:47.693775   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:15:47.699741   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:15:47.705442   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:15:47.710938   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:15:47.716367   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:15:47.721958   50624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:15:47.727403   50624 kubeadm.go:404] StartCluster: {Name:embed-certs-598346 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-598346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:15:47.727520   50624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:15:47.727599   50624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:15:47.771682   50624 cri.go:89] found id: ""
	I1207 21:15:47.771763   50624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:15:47.782923   50624 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:15:47.782946   50624 kubeadm.go:636] restartCluster start
	I1207 21:15:47.783020   50624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:15:47.791494   50624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.792645   50624 kubeconfig.go:92] found "embed-certs-598346" server: "https://192.168.72.180:8443"
	I1207 21:15:47.794953   50624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:15:47.804014   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:47.804096   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:47.815412   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.815433   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:47.815503   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:47.825646   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:48.326356   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:48.326438   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:48.338771   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:48.826334   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:48.826405   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:48.837498   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:49.325998   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:49.326084   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:49.338197   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:49.825701   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:49.825821   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:49.842649   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:50.326181   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:50.326277   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:50.341560   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:50.826087   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:50.826183   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:50.841186   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:47.021061   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:47.021495   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:47.021519   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:47.021459   51634 retry.go:31] will retry after 1.183999845s: waiting for machine to come up
	I1207 21:15:48.206768   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:48.207222   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:48.207250   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:48.207183   51634 retry.go:31] will retry after 1.595211966s: waiting for machine to come up
	I1207 21:15:49.804832   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:49.805298   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:49.805328   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:49.805229   51634 retry.go:31] will retry after 2.126345359s: waiting for machine to come up
	I1207 21:15:51.325994   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:51.326083   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:51.338573   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:51.826180   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:51.826253   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:51.837573   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:52.326115   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:52.326192   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:52.336984   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:52.826590   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:52.826681   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:52.837678   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:53.326205   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:53.326279   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:53.337579   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:53.826047   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:53.826145   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:53.840263   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:54.325765   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:54.325842   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:54.337452   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:54.825969   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:54.826063   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:54.837428   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:55.325968   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:55.326060   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:55.337128   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:55.826749   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:55.826832   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:55.838002   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:51.933915   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:51.934338   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:51.934372   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:51.934279   51634 retry.go:31] will retry after 2.448139802s: waiting for machine to come up
	I1207 21:15:54.384038   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:54.384399   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:54.384425   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:54.384351   51634 retry.go:31] will retry after 3.211975182s: waiting for machine to come up
	I1207 21:15:56.325893   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:56.326007   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:56.337698   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:56.825827   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:56.825964   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:56.836945   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:57.326560   50624 api_server.go:166] Checking apiserver status ...
	I1207 21:15:57.326637   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:15:57.337299   50624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:15:57.804902   50624 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:15:57.804933   50624 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:15:57.804946   50624 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:15:57.805023   50624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:15:57.846788   50624 cri.go:89] found id: ""
	I1207 21:15:57.846877   50624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:15:57.861513   50624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:15:57.869730   50624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:15:57.869781   50624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:15:57.877777   50624 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:15:57.877801   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:57.992244   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:58.878385   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.051985   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.136414   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:15:59.232261   50624 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:15:59.232358   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:59.246262   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:59.760617   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:00.260132   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:00.760723   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:15:57.599056   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:15:57.599417   51037 main.go:141] libmachine: (no-preload-950431) DBG | unable to find current IP address of domain no-preload-950431 in network mk-no-preload-950431
	I1207 21:15:57.599444   51037 main.go:141] libmachine: (no-preload-950431) DBG | I1207 21:15:57.599382   51634 retry.go:31] will retry after 5.532381184s: waiting for machine to come up
	I1207 21:16:04.442905   51113 start.go:369] acquired machines lock for "default-k8s-diff-port-275828" in 3m9.513966804s
	I1207 21:16:04.442972   51113 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:16:04.442985   51113 fix.go:54] fixHost starting: 
	I1207 21:16:04.443390   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:04.443434   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:04.460087   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45507
	I1207 21:16:04.460495   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:04.460991   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:04.461014   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:04.461405   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:04.461582   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:04.461705   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:04.463304   51113 fix.go:102] recreateIfNeeded on default-k8s-diff-port-275828: state=Stopped err=<nil>
	I1207 21:16:04.463337   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	W1207 21:16:04.463494   51113 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:16:04.465895   51113 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-275828" ...
	I1207 21:16:04.467328   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Start
	I1207 21:16:04.467485   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring networks are active...
	I1207 21:16:04.468206   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring network default is active
	I1207 21:16:04.468581   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Ensuring network mk-default-k8s-diff-port-275828 is active
	I1207 21:16:04.468943   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Getting domain xml...
	I1207 21:16:04.469483   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Creating domain...
	I1207 21:16:03.134233   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.134762   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has current primary IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.134794   51037 main.go:141] libmachine: (no-preload-950431) Found IP for machine: 192.168.50.100
	I1207 21:16:03.134811   51037 main.go:141] libmachine: (no-preload-950431) Reserving static IP address...
	I1207 21:16:03.135186   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.135209   51037 main.go:141] libmachine: (no-preload-950431) Reserved static IP address: 192.168.50.100
	I1207 21:16:03.135230   51037 main.go:141] libmachine: (no-preload-950431) DBG | skip adding static IP to network mk-no-preload-950431 - found existing host DHCP lease matching {name: "no-preload-950431", mac: "52:54:00:80:97:8f", ip: "192.168.50.100"}
	I1207 21:16:03.135251   51037 main.go:141] libmachine: (no-preload-950431) DBG | Getting to WaitForSSH function...
	I1207 21:16:03.135265   51037 main.go:141] libmachine: (no-preload-950431) Waiting for SSH to be available...
	I1207 21:16:03.137331   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.137662   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.137689   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.137792   51037 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH client type: external
	I1207 21:16:03.137817   51037 main.go:141] libmachine: (no-preload-950431) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa (-rw-------)
	I1207 21:16:03.137854   51037 main.go:141] libmachine: (no-preload-950431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:03.137871   51037 main.go:141] libmachine: (no-preload-950431) DBG | About to run SSH command:
	I1207 21:16:03.137890   51037 main.go:141] libmachine: (no-preload-950431) DBG | exit 0
	I1207 21:16:03.229593   51037 main.go:141] libmachine: (no-preload-950431) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:03.230019   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetConfigRaw
	I1207 21:16:03.230604   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:03.233069   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.233426   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.233462   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.233661   51037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/config.json ...
	I1207 21:16:03.233837   51037 machine.go:88] provisioning docker machine ...
	I1207 21:16:03.233855   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:03.234081   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.234254   51037 buildroot.go:166] provisioning hostname "no-preload-950431"
	I1207 21:16:03.234277   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.234386   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.236593   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.236859   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.236892   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.237079   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.237243   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.237396   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.237522   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.237653   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.238000   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.238016   51037 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-950431 && echo "no-preload-950431" | sudo tee /etc/hostname
	I1207 21:16:03.374959   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-950431
	
	I1207 21:16:03.374999   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.377825   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.378212   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.378247   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.378389   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.378604   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.378763   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.378896   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.379041   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.379363   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.379399   51037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-950431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-950431/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-950431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:03.510050   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:03.510081   51037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:03.510109   51037 buildroot.go:174] setting up certificates
	I1207 21:16:03.510119   51037 provision.go:83] configureAuth start
	I1207 21:16:03.510130   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetMachineName
	I1207 21:16:03.510367   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:03.512754   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.513120   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.513151   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.513289   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.515546   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.515894   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.515947   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.516066   51037 provision.go:138] copyHostCerts
	I1207 21:16:03.516119   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:03.516138   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:03.516206   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:03.516294   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:03.516303   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:03.516328   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:03.516398   51037 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:03.516406   51037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:03.516430   51037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:03.516480   51037 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.no-preload-950431 san=[192.168.50.100 192.168.50.100 localhost 127.0.0.1 minikube no-preload-950431]
	I1207 21:16:03.662663   51037 provision.go:172] copyRemoteCerts
	I1207 21:16:03.662732   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:03.662756   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.665043   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.665344   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.665379   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.665523   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.665713   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.665887   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.666049   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:03.757956   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:03.782348   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1207 21:16:03.806388   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 21:16:03.831058   51037 provision.go:86] duration metric: configureAuth took 320.927373ms
	I1207 21:16:03.831086   51037 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:03.831264   51037 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:16:03.831365   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:03.834104   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.834489   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:03.834535   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:03.834703   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:03.834901   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.835087   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:03.835224   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:03.835370   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:03.835699   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:03.835721   51037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:04.154758   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:04.154783   51037 machine.go:91] provisioned docker machine in 920.933844ms
	I1207 21:16:04.154795   51037 start.go:300] post-start starting for "no-preload-950431" (driver="kvm2")
	I1207 21:16:04.154810   51037 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:04.154829   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.155148   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:04.155173   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.157776   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.158131   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.158163   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.158336   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.158560   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.158733   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.158873   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.258325   51037 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:04.262930   51037 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:04.262950   51037 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:04.263011   51037 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:04.263077   51037 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:04.263177   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:04.271602   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:04.303816   51037 start.go:303] post-start completed in 148.990598ms
	I1207 21:16:04.303849   51037 fix.go:56] fixHost completed within 23.617201529s
	I1207 21:16:04.303873   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.306576   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.306930   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.306962   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.307104   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.307326   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.307458   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.307591   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.307773   51037 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:04.308242   51037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1207 21:16:04.308260   51037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:04.442724   51037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983764.388433819
	
	I1207 21:16:04.442748   51037 fix.go:206] guest clock: 1701983764.388433819
	I1207 21:16:04.442757   51037 fix.go:219] Guest: 2023-12-07 21:16:04.388433819 +0000 UTC Remote: 2023-12-07 21:16:04.303852803 +0000 UTC m=+192.597462932 (delta=84.581016ms)
	I1207 21:16:04.442797   51037 fix.go:190] guest clock delta is within tolerance: 84.581016ms
	I1207 21:16:04.442801   51037 start.go:83] releasing machines lock for "no-preload-950431", held for 23.756181397s
	I1207 21:16:04.442827   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.443065   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:04.446137   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.446578   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.446612   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.446797   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447413   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447656   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:16:04.447732   51037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:04.447783   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.447902   51037 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:04.447923   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:16:04.450882   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451025   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451253   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.451280   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451470   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.451481   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:04.451507   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:04.451654   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.451720   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:16:04.451923   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.452043   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:16:04.452098   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.452561   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:16:04.452761   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:16:04.565982   51037 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:04.573821   51037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:04.741571   51037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:04.749951   51037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:04.750038   51037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:04.770148   51037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:04.770176   51037 start.go:475] detecting cgroup driver to use...
	I1207 21:16:04.770244   51037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:04.787798   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:04.802346   51037 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:04.802415   51037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:04.819638   51037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:04.836910   51037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:04.947330   51037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:05.087698   51037 docker.go:219] disabling docker service ...
	I1207 21:16:05.087794   51037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:05.104790   51037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:05.122187   51037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:05.252225   51037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:05.394598   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:05.408596   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:05.429804   51037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:16:05.429876   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.441617   51037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:05.441700   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.452787   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.462684   51037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:05.472827   51037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:05.485493   51037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:05.495282   51037 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:05.495367   51037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:05.512972   51037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:05.523817   51037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:05.674940   51037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:05.866827   51037 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:05.866913   51037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:05.873044   51037 start.go:543] Will wait 60s for crictl version
	I1207 21:16:05.873109   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:05.878484   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:05.919888   51037 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:05.919979   51037 ssh_runner.go:195] Run: crio --version
	I1207 21:16:05.976795   51037 ssh_runner.go:195] Run: crio --version
	I1207 21:16:06.034745   51037 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
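The runtime detection step above just shells out to `sudo /usr/bin/crictl version` and reads the colon-separated fields (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion) that start.go logs. A minimal Go sketch of parsing that output; the parsing code is illustrative and not minikube's own implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// crictlVersion runs `sudo crictl version` and returns its key/value fields
// (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion), matching the
// output shape shown in the log above.
func crictlVersion() (map[string]string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl version: %w", err)
	}
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		parts := strings.SplitN(sc.Text(), ":", 2)
		if len(parts) == 2 {
			fields[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
		}
	}
	return fields, sc.Err()
}

func main() {
	fields, err := crictlVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime %s %s (API %s)\n",
		fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}
```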
	I1207 21:16:01.260865   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:01.760580   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:01.790951   50624 api_server.go:72] duration metric: took 2.55868777s to wait for apiserver process to appear ...
	I1207 21:16:01.790981   50624 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:01.791000   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.338427   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:05.338467   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:05.338483   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.436356   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:05.436385   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:05.937143   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:05.943626   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:05.943656   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
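The 403 and 500 responses above are the normal progression while the restarted apiserver finishes its post-start hooks; the health wait simply keeps polling /healthz until it returns 200 (which it does a moment later in this run). A minimal Go sketch of that kind of polling loop, using only the standard library; the endpoint comes from the log, while the timeout and interval are illustrative values rather than minikube's api_server.go internals:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes. TLS verification is skipped because the
// probe runs before client certificates are wired up, which is also why the
// log shows anonymous 403 responses at first.
func waitForHealthz(url string, timeout, interval time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the log; 4m/500ms are example settings.
	if err := waitForHealthz("https://192.168.72.180:8443/healthz", 4*time.Minute, 500*time.Millisecond); err != nil {
		panic(err)
	}
}
```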
	I1207 21:16:06.036269   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetIP
	I1207 21:16:06.039546   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:06.039919   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:16:06.039968   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:16:06.040205   51037 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:06.044899   51037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:06.061053   51037 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 21:16:06.061106   51037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:06.099113   51037 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1207 21:16:06.099136   51037 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:16:06.099196   51037 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:06.099225   51037 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.099246   51037 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1207 21:16:06.099283   51037 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.099314   51037 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.099229   51037 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.099419   51037 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.099484   51037 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.100960   51037 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.100961   51037 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.101035   51037 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1207 21:16:06.100967   51037 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.100967   51037 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.100970   51037 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.100970   51037 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.100973   51037 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:06.234869   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.272014   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.275605   51037 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1207 21:16:06.275659   51037 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.275716   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.295068   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.329385   51037 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1207 21:16:06.329435   51037 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.329449   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 21:16:06.329486   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.356701   51037 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1207 21:16:06.356744   51037 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.356790   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.382536   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1207 21:16:06.389671   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.391917   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.399801   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 21:16:06.399908   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.399980   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.400067   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1207 21:16:06.409081   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.616824   51037 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1207 21:16:06.616864   51037 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1207 21:16:06.616876   51037 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 21:16:06.616884   51037 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.616923   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.616930   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.617038   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1207 21:16:06.617075   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1207 21:16:06.617086   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.617114   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:06.617122   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1207 21:16:06.617199   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1207 21:16:06.617272   51037 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1207 21:16:06.617286   51037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:06.617305   51037 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:06.617353   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:06.631975   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 21:16:06.632094   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1207 21:16:06.632181   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
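Because this is a no-preload profile, each control-plane image is checked on the guest with `podman image inspect`, removed with `crictl rmi` when the stored ID does not match, and then loaded from the tarball cached under .minikube/cache/images. The sketch below shows that inspect-then-load decision using plain os/exec instead of minikube's ssh_runner; the image name, expected ID, and tarball path mirror values from the log but are only examples:

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads a cached image tarball into the container runtime's
// storage unless the image already exists at the expected ID, mirroring the
// inspect -> rmi -> podman load sequence in the log (run locally here,
// not over SSH).
func ensureImage(image, wantID, tarball string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err == nil && string(out) == wantID+"\n" {
		return nil // already present with the right ID
	}
	if err == nil {
		// Present but with a different ID: remove it before reloading.
		if rmOut, rmErr := exec.Command("sudo", "crictl", "rmi", image).CombinedOutput(); rmErr != nil {
			return fmt.Errorf("crictl rmi %s: %v: %s", image, rmErr, rmOut)
		}
	}
	if loadOut, loadErr := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); loadErr != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, loadErr, loadOut)
	}
	return nil
}

func main() {
	// Example values based on the log's kube-apiserver entry.
	err := ensureImage(
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.1",
		"5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956",
		"/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1",
	)
	if err != nil {
		panic(err)
	}
}
```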
	I1207 21:16:06.436900   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:06.457077   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:06.457122   50624 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:06.936534   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:16:06.943658   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1207 21:16:06.952206   50624 api_server.go:141] control plane version: v1.28.4
	I1207 21:16:06.952239   50624 api_server.go:131] duration metric: took 5.161250619s to wait for apiserver health ...
	I1207 21:16:06.952251   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:16:06.952259   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:06.954179   50624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:16:05.844251   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting to get IP...
	I1207 21:16:05.845419   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:05.845793   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:05.845896   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:05.845790   51802 retry.go:31] will retry after 224.053393ms: waiting for machine to come up
	I1207 21:16:06.071071   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.071521   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.071545   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.071464   51802 retry.go:31] will retry after 272.776477ms: waiting for machine to come up
	I1207 21:16:06.346126   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.346739   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.346773   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.346683   51802 retry.go:31] will retry after 373.022784ms: waiting for machine to come up
	I1207 21:16:06.721567   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.722089   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:06.722115   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:06.722029   51802 retry.go:31] will retry after 380.100559ms: waiting for machine to come up
	I1207 21:16:07.103408   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.103853   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.103884   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:07.103798   51802 retry.go:31] will retry after 473.24776ms: waiting for machine to come up
	I1207 21:16:07.578548   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.579087   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:07.579232   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:07.579176   51802 retry.go:31] will retry after 892.826082ms: waiting for machine to come up
	I1207 21:16:08.473531   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:08.474027   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:08.474058   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:08.473989   51802 retry.go:31] will retry after 1.042648737s: waiting for machine to come up
	I1207 21:16:09.518823   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:09.519321   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:09.519363   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:09.519213   51802 retry.go:31] will retry after 948.481622ms: waiting for machine to come up
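The default-k8s-diff-port-275828 machine has no DHCP lease yet, so libmachine keeps re-reading the lease table with a growing, jittered delay (224ms, 272ms, 373ms, ... in the log) until an IP appears. A compact Go sketch of that style of retry loop; lookupIP is a stand-in placeholder, not the libvirt lease query minikube actually performs, and the backoff constants are assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder for the real lease lookup (libvirt DHCP leases
// in minikube's case). It always fails here so the retry path is exercised.
func lookupIP(mac string) (string, error) { return "", errNoLease }

// waitForIP retries lookupIP with a growing, jittered backoff until it
// succeeds or the overall timeout expires, similar to the
// "will retry after ..." messages from retry.go in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		// Add up to 50% jitter and grow the delay, capped at 2s.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 2*time.Second {
			backoff = backoff * 3 / 2
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
}

func main() {
	if _, err := waitForIP("52:54:00:f3:1f:c5", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```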
	I1207 21:16:06.955727   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:06.967724   50624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
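The 457-byte /etc/cni/net.d/1-k8s.conflist copied here is the bridge CNI chain that minikube generates when it recommends the bridge CNI for the kvm2 + crio combination. The exact payload is not shown in the log; the sketch below writes a representative bridge + portmap conflist with encoding/json, so the field names are standard CNI plugin keys, but the concrete values (bridge name, subnet) are assumptions rather than the test run's actual content:

```go
package main

import (
	"encoding/json"
	"os"
)

// A representative bridge CNI chain similar in shape to what minikube
// installs at /etc/cni/net.d/1-k8s.conflist. Values such as the subnet are
// examples, not copied from this test run.
var conflist = map[string]any{
	"cniVersion": "0.3.1",
	"name":       "bridge",
	"plugins": []map[string]any{
		{
			"type":             "bridge",
			"bridge":           "bridge",
			"addIf":            "true",
			"isDefaultGateway": true,
			"ipMasq":           true,
			"hairpinMode":      true,
			"ipam": map[string]any{
				"type":   "host-local",
				"subnet": "10.244.0.0/16",
			},
		},
		{
			"type":         "portmap",
			"capabilities": map[string]bool{"portMappings": true},
		},
	},
}

func main() {
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// Written to a local file here; minikube copies the bytes to the guest
	// over SSH instead.
	if err := os.WriteFile("1-k8s.conflist", append(data, '\n'), 0o644); err != nil {
		panic(err)
	}
}
```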
	I1207 21:16:06.990163   50624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:07.001387   50624 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:07.001425   50624 system_pods.go:61] "coredns-5dd5756b68-hlpsb" [c1f9f7db-0741-483c-9e39-d6f0ce4715d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:07.001436   50624 system_pods.go:61] "etcd-embed-certs-598346" [acda3700-87a2-4442-94e6-1d17288e7cee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:07.001446   50624 system_pods.go:61] "kube-apiserver-embed-certs-598346" [e1439056-061b-4add-a399-c55a816fba70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:07.001456   50624 system_pods.go:61] "kube-controller-manager-embed-certs-598346" [b4c80c36-da2c-4c46-b655-3c6bb2a96ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:07.001466   50624 system_pods.go:61] "kube-proxy-jqhnn" [e2635205-e67a-4b56-a7b4-82fe97b5fe7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:07.001490   50624 system_pods.go:61] "kube-scheduler-embed-certs-598346" [3b90e1d4-9c0f-46e4-a7b7-5e42717a8b70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:07.001499   50624 system_pods.go:61] "metrics-server-57f55c9bc5-sndh4" [9a052ce0-760f-4cfd-a958-971daa14ea02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:07.001511   50624 system_pods.go:61] "storage-provisioner" [bf244954-a1d7-4b51-9085-387e60d02792] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:07.001524   50624 system_pods.go:74] duration metric: took 11.336763ms to wait for pod list to return data ...
	I1207 21:16:07.001538   50624 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:07.007697   50624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:07.007737   50624 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:07.007752   50624 node_conditions.go:105] duration metric: took 6.207447ms to run NodePressure ...
	I1207 21:16:07.007770   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:07.287760   50624 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:07.297260   50624 kubeadm.go:787] kubelet initialised
	I1207 21:16:07.297285   50624 kubeadm.go:788] duration metric: took 9.495153ms waiting for restarted kubelet to initialise ...
	I1207 21:16:07.297296   50624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:07.304800   50624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.313488   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.313523   50624 pod_ready.go:81] duration metric: took 8.689063ms waiting for pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.313535   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "coredns-5dd5756b68-hlpsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.313545   50624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.321603   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "etcd-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.321637   50624 pod_ready.go:81] duration metric: took 8.078752ms waiting for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.321649   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "etcd-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.321658   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.333040   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.333068   50624 pod_ready.go:81] duration metric: took 11.399287ms waiting for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.333081   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.333089   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:07.397606   50624 pod_ready.go:97] node "embed-certs-598346" hosting pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.397632   50624 pod_ready.go:81] duration metric: took 64.53373ms waiting for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:07.397642   50624 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-598346" hosting pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-598346" has status "Ready":"False"
	I1207 21:16:07.397648   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqhnn" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:08.713161   50624 pod_ready.go:92] pod "kube-proxy-jqhnn" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:08.713188   50624 pod_ready.go:81] duration metric: took 1.315530906s waiting for pod "kube-proxy-jqhnn" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:08.713201   50624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:10.919896   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:07.059825   51037 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:10.061030   51037 ssh_runner.go:235] Completed: which crictl: (3.443650725s)
	I1207 21:16:10.061121   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 21:16:10.061130   51037 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (3.443992158s)
	I1207 21:16:10.061160   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1207 21:16:10.061174   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.444033736s)
	I1207 21:16:10.061199   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1207 21:16:10.061225   51037 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:10.061245   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1: (3.429236441s)
	I1207 21:16:10.061286   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1207 21:16:10.061294   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:10.061296   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (3.429094571s)
	I1207 21:16:10.061330   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1207 21:16:10.061346   51037 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.001491955s)
	I1207 21:16:10.061361   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:10.061387   51037 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1207 21:16:10.061402   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:10.061430   51037 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:10.061469   51037 ssh_runner.go:195] Run: which crictl
	I1207 21:16:10.469685   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:10.470224   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:10.470251   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:10.470187   51802 retry.go:31] will retry after 1.846436384s: waiting for machine to come up
	I1207 21:16:12.319116   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:12.319558   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:12.319590   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:12.319512   51802 retry.go:31] will retry after 1.415005437s: waiting for machine to come up
	I1207 21:16:13.736082   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:13.736599   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:13.736630   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:13.736533   51802 retry.go:31] will retry after 2.499952402s: waiting for machine to come up
	I1207 21:16:13.413966   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:15.414181   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:14.287122   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.225788884s)
	I1207 21:16:14.287166   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1207 21:16:14.287165   51037 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: (4.226018563s)
	I1207 21:16:14.287190   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:14.287204   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:14.287130   51037 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.225706156s)
	I1207 21:16:14.287208   51037 ssh_runner.go:235] Completed: which crictl: (4.225716226s)
	I1207 21:16:14.287294   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:14.287310   51037 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (4.225934747s)
	I1207 21:16:14.287322   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1207 21:16:14.287325   51037 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:14.287270   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1207 21:16:14.287238   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1207 21:16:14.338957   51037 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 21:16:14.339087   51037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:16.589704   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.302291312s)
	I1207 21:16:16.589740   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1207 21:16:16.589764   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:16.589777   51037 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.302463063s)
	I1207 21:16:16.589816   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1207 21:16:16.589817   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1207 21:16:16.589887   51037 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.250737859s)
	I1207 21:16:16.589912   51037 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1207 21:16:16.238979   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:16.239340   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:16.239367   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:16.239304   51802 retry.go:31] will retry after 2.478988074s: waiting for machine to come up
	I1207 21:16:18.720359   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:18.720892   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:18.720925   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:18.720840   51802 retry.go:31] will retry after 4.119588433s: waiting for machine to come up
	I1207 21:16:17.913477   50624 pod_ready.go:102] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:18.407386   50624 pod_ready.go:92] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:18.407417   50624 pod_ready.go:81] duration metric: took 9.694207323s waiting for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:18.407431   50624 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:20.429952   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:18.142546   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (1.552699587s)
	I1207 21:16:18.142620   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1207 21:16:18.142658   51037 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:18.142737   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1207 21:16:20.432330   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.289556402s)
	I1207 21:16:20.432358   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1207 21:16:20.432386   51037 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:20.432436   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1207 21:16:22.843120   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:22.843516   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | unable to find current IP address of domain default-k8s-diff-port-275828 in network mk-default-k8s-diff-port-275828
	I1207 21:16:22.843540   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | I1207 21:16:22.843470   51802 retry.go:31] will retry after 3.969701228s: waiting for machine to come up
	I1207 21:16:22.431295   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:24.929166   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:22.891954   51037 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.459495307s)
	I1207 21:16:22.891978   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1207 21:16:22.892001   51037 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:22.892056   51037 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1207 21:16:23.742939   51037 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1207 21:16:23.743011   51037 cache_images.go:123] Successfully loaded all cached images
	I1207 21:16:23.743021   51037 cache_images.go:92] LoadImages completed in 17.643875393s
	I1207 21:16:23.743107   51037 ssh_runner.go:195] Run: crio config
	I1207 21:16:23.802064   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:16:23.802087   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:23.802106   51037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:23.802128   51037 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.100 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-950431 NodeName:no-preload-950431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:16:23.802258   51037 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-950431"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:23.802329   51037 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-950431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-950431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:16:23.802382   51037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1207 21:16:23.813052   51037 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:23.813143   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:23.823249   51037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1207 21:16:23.840999   51037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1207 21:16:23.857599   51037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1207 21:16:23.873664   51037 ssh_runner.go:195] Run: grep 192.168.50.100	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:23.877208   51037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:23.888109   51037 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431 for IP: 192.168.50.100
	I1207 21:16:23.888148   51037 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:23.888298   51037 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:23.888333   51037 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:23.888394   51037 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.key
	I1207 21:16:23.888453   51037 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.key.8f36cd02
	I1207 21:16:23.888490   51037 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.key
	I1207 21:16:23.888598   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:23.888626   51037 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:23.888638   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:23.888669   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:23.888701   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:23.888725   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:23.888769   51037 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:23.889405   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:23.911313   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 21:16:23.935796   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:23.960576   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:16:23.983952   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:24.005755   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:24.027232   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:24.049398   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:24.073975   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:24.097326   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:24.118396   51037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:24.140590   51037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:24.157287   51037 ssh_runner.go:195] Run: openssl version
	I1207 21:16:24.163079   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:24.173618   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.177973   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.178038   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:24.183537   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:24.193750   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:24.203836   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.208278   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.208324   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:24.213906   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:24.223939   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:24.234037   51037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.238379   51037 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.238443   51037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:24.243650   51037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:24.253904   51037 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:24.258343   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:24.264011   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:24.269609   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:24.275294   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:24.280969   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:24.286763   51037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:16:24.292414   51037 kubeadm.go:404] StartCluster: {Name:no-preload-950431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-950431 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:24.292505   51037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:24.292565   51037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:24.342426   51037 cri.go:89] found id: ""
	I1207 21:16:24.342596   51037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:24.353900   51037 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:24.353939   51037 kubeadm.go:636] restartCluster start
	I1207 21:16:24.353999   51037 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:24.363465   51037 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.364722   51037 kubeconfig.go:92] found "no-preload-950431" server: "https://192.168.50.100:8443"
	I1207 21:16:24.367198   51037 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:24.378918   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.378971   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.391331   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.391354   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.391393   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.403003   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:24.903722   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:24.903814   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:24.915891   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:25.403459   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:25.403568   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:25.415677   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:25.903683   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:25.903765   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:25.915474   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:26.403146   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:26.403258   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:26.414072   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.031043   50270 start.go:369] acquired machines lock for "old-k8s-version-483745" in 1m1.958159244s
	I1207 21:16:28.031117   50270 start.go:96] Skipping create...Using existing machine configuration
	I1207 21:16:28.031127   50270 fix.go:54] fixHost starting: 
	I1207 21:16:28.031477   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:28.031504   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:28.047757   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I1207 21:16:28.048134   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:28.048598   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:16:28.048628   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:28.048962   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:28.049123   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:28.049278   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:16:28.050698   50270 fix.go:102] recreateIfNeeded on old-k8s-version-483745: state=Stopped err=<nil>
	I1207 21:16:28.050716   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	W1207 21:16:28.050943   50270 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 21:16:28.053462   50270 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-483745" ...
	I1207 21:16:28.054995   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Start
	I1207 21:16:28.055169   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring networks are active...
	I1207 21:16:28.055803   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring network default is active
	I1207 21:16:28.056167   50270 main.go:141] libmachine: (old-k8s-version-483745) Ensuring network mk-old-k8s-version-483745 is active
	I1207 21:16:28.056613   50270 main.go:141] libmachine: (old-k8s-version-483745) Getting domain xml...
	I1207 21:16:28.057267   50270 main.go:141] libmachine: (old-k8s-version-483745) Creating domain...
	I1207 21:16:26.815724   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.816306   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Found IP for machine: 192.168.39.254
	I1207 21:16:26.816346   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Reserving static IP address...
	I1207 21:16:26.816373   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has current primary IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.816843   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-275828", mac: "52:54:00:f3:1f:c5", ip: "192.168.39.254"} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.816874   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Reserved static IP address: 192.168.39.254
	I1207 21:16:26.816895   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | skip adding static IP to network mk-default-k8s-diff-port-275828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-275828", mac: "52:54:00:f3:1f:c5", ip: "192.168.39.254"}
	I1207 21:16:26.816916   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Getting to WaitForSSH function...
	I1207 21:16:26.816933   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Waiting for SSH to be available...
	I1207 21:16:26.819265   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.819625   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.819654   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.819808   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Using SSH client type: external
	I1207 21:16:26.819840   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa (-rw-------)
	I1207 21:16:26.819880   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:26.819908   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | About to run SSH command:
	I1207 21:16:26.819930   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | exit 0
	I1207 21:16:26.913932   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:26.914232   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetConfigRaw
	I1207 21:16:26.915043   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:26.917486   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.917899   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.917944   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.918182   51113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/config.json ...
	I1207 21:16:26.918360   51113 machine.go:88] provisioning docker machine ...
	I1207 21:16:26.918380   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:26.918587   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:26.918775   51113 buildroot.go:166] provisioning hostname "default-k8s-diff-port-275828"
	I1207 21:16:26.918805   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:26.918971   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:26.921227   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.921482   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:26.921515   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:26.921657   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:26.921818   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:26.922006   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:26.922162   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:26.922317   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:26.922695   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:26.922713   51113 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-275828 && echo "default-k8s-diff-port-275828" | sudo tee /etc/hostname
	I1207 21:16:27.066745   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-275828
	
	I1207 21:16:27.066778   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.069493   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.069842   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.069895   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.070078   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.070295   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.070446   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.070596   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.070824   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.071271   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.071302   51113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-275828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-275828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-275828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:27.206475   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:27.206503   51113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:27.206534   51113 buildroot.go:174] setting up certificates
	I1207 21:16:27.206545   51113 provision.go:83] configureAuth start
	I1207 21:16:27.206553   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetMachineName
	I1207 21:16:27.206818   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:27.209295   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.209632   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.209666   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.209763   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.211882   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.212147   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.212176   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.212250   51113 provision.go:138] copyHostCerts
	I1207 21:16:27.212306   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:27.212326   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:27.212396   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:27.212501   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:27.212511   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:27.212540   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:27.212617   51113 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:27.212627   51113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:27.212656   51113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:27.212728   51113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-275828 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube default-k8s-diff-port-275828]
	I1207 21:16:27.273212   51113 provision.go:172] copyRemoteCerts
	I1207 21:16:27.273291   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:27.273321   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.275905   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.276185   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.276219   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.276380   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.276569   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.276703   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.276814   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:27.371834   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:27.394096   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1207 21:16:27.416619   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:16:27.443103   51113 provision.go:86] duration metric: configureAuth took 236.548224ms
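
The copyRemoteCerts step above pushes the CA and the generated server cert/key into /etc/docker on the guest over SSH. A rough standalone sketch of the same copy is below; plain scp is shown only for illustration (minikube actually streams the bytes through its ssh_runner with sudo, since /etc/docker is root-owned), and only the host, user, key path and file list are taken from the log.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// source on the Jenkins host -> destination on the guest (paths from the log)
	files := map[string]string{
		"/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem":           "/etc/docker/ca.pem",
		"/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem":     "/etc/docker/server.pem",
		"/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem": "/etc/docker/server-key.pem",
	}
	key := "/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa"
	for src, dst := range files {
		// copy each cert to the guest; the real runner pipes through sudo tee instead of scp
		out, err := exec.Command("scp", "-i", key, src, "docker@192.168.39.254:"+dst).CombinedOutput()
		if err != nil {
			log.Fatalf("scp %s: %v\n%s", src, err, out)
		}
	}
}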
	I1207 21:16:27.443127   51113 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:27.443336   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:16:27.443406   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.446005   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.446303   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.446334   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.446477   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.446648   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.446789   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.446959   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.447158   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.447600   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.447623   51113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:27.760539   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:27.760582   51113 machine.go:91] provisioned docker machine in 842.207987ms
	I1207 21:16:27.760608   51113 start.go:300] post-start starting for "default-k8s-diff-port-275828" (driver="kvm2")
	I1207 21:16:27.760617   51113 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:27.760633   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:27.760993   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:27.761030   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.763527   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.763923   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.763968   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.764077   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.764254   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.764386   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.764559   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:27.860772   51113 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:27.865258   51113 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:27.865285   51113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:27.865348   51113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:27.865422   51113 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:27.865537   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:27.874901   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:27.896890   51113 start.go:303] post-start completed in 136.257327ms
	I1207 21:16:27.896912   51113 fix.go:56] fixHost completed within 23.453929111s
	I1207 21:16:27.896932   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:27.899422   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.899740   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:27.899780   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:27.899916   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:27.900104   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.900265   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:27.900400   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:27.900601   51113 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:27.900920   51113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.254 22 <nil> <nil>}
	I1207 21:16:27.900935   51113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 21:16:28.030917   51113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983787.976128099
	
	I1207 21:16:28.030936   51113 fix.go:206] guest clock: 1701983787.976128099
	I1207 21:16:28.030943   51113 fix.go:219] Guest: 2023-12-07 21:16:27.976128099 +0000 UTC Remote: 2023-12-07 21:16:27.896915587 +0000 UTC m=+213.119643923 (delta=79.212512ms)
	I1207 21:16:28.030970   51113 fix.go:190] guest clock delta is within tolerance: 79.212512ms
	I1207 21:16:28.030975   51113 start.go:83] releasing machines lock for "default-k8s-diff-port-275828", held for 23.588040931s
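
Before releasing the machines lock, fix.go compares the guest's clock against the host's and only proceeds because the delta (~79ms here) is within tolerance. A minimal sketch of that comparison follows; the tolerance value and function names are assumptions, not minikube's actual API.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether guest and host clocks differ by
// no more than the given tolerance, returning the absolute delta as well.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(79 * time.Millisecond) // roughly the delta seen in the log
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}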
	I1207 21:16:28.031003   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.031255   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:28.033864   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.034277   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.034318   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.034501   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035101   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035283   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:28.035354   51113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:28.035399   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:28.035519   51113 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:28.035543   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:28.038353   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038570   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038636   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.038675   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.038789   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:28.038993   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:28.039013   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:28.039035   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:28.039152   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:28.039189   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:28.039319   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:28.039368   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:28.039495   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:28.039619   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:28.161850   51113 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:28.167540   51113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:28.311477   51113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:28.319102   51113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:28.319177   51113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:28.334118   51113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:28.334138   51113 start.go:475] detecting cgroup driver to use...
	I1207 21:16:28.334187   51113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:28.351563   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:28.364950   51113 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:28.365015   51113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:28.380367   51113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:28.396070   51113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:28.504230   51113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:28.634829   51113 docker.go:219] disabling docker service ...
	I1207 21:16:28.634893   51113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:28.648955   51113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:28.660615   51113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:28.781577   51113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:28.899307   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:28.912673   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:28.931310   51113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 21:16:28.931384   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.941006   51113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:28.941083   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.951712   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.963062   51113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:28.973981   51113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:28.984828   51113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:28.993884   51113 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:28.993992   51113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:29.007812   51113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:29.017781   51113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:29.147958   51113 ssh_runner.go:195] Run: sudo systemctl restart crio
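
The runtime setup above treats the bridge-netfilter sysctl probe as best effort: `sudo sysctl net.bridge.bridge-nf-call-iptables` fails because br_netfilter is not loaded yet, so the module is loaded instead and CRI-O is restarted. A small sketch of that "probe, then fall back" pattern; the command names come from the log, everything else is assumed.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and wraps any failure together with its output.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// Probe the bridge netfilter sysctl; a failure here "might be okay",
	// so only log it and try loading the kernel module instead.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		log.Printf("couldn't verify netfilter (might be okay): %v", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
}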
	I1207 21:16:29.329720   51113 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:29.329781   51113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:29.336048   51113 start.go:543] Will wait 60s for crictl version
	I1207 21:16:29.336109   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:16:29.340075   51113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:29.378207   51113 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:29.378289   51113 ssh_runner.go:195] Run: crio --version
	I1207 21:16:29.438034   51113 ssh_runner.go:195] Run: crio --version
	I1207 21:16:29.487899   51113 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1207 21:16:29.489336   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetIP
	I1207 21:16:29.492387   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:29.492824   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:29.492858   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:29.493105   51113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:29.497882   51113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:29.510857   51113 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 21:16:29.510910   51113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:29.557513   51113 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1207 21:16:29.557590   51113 ssh_runner.go:195] Run: which lz4
	I1207 21:16:29.561849   51113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 21:16:29.566351   51113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:16:29.566383   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1207 21:16:26.930512   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:29.442726   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:26.903645   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:26.903716   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:26.915728   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:27.403874   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:27.403939   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:27.415501   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:27.904082   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:27.904150   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:27.916404   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.404050   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:28.404143   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:28.416757   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:28.903144   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:28.903202   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:28.914709   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.403236   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:29.403324   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:29.415595   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.903823   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:29.903908   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:29.920093   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:30.403786   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:30.403864   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:30.417374   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:30.903246   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:30.903335   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:30.916333   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:31.403909   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:31.403984   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:31.418792   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:29.352362   50270 main.go:141] libmachine: (old-k8s-version-483745) Waiting to get IP...
	I1207 21:16:29.353395   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.353871   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.353965   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.353847   51971 retry.go:31] will retry after 307.502031ms: waiting for machine to come up
	I1207 21:16:29.663412   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.663958   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.663990   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.663889   51971 retry.go:31] will retry after 328.013518ms: waiting for machine to come up
	I1207 21:16:29.993550   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:29.994129   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:29.994160   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:29.994066   51971 retry.go:31] will retry after 315.323859ms: waiting for machine to come up
	I1207 21:16:30.310570   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:30.311106   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:30.311139   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:30.311055   51971 retry.go:31] will retry after 547.317149ms: waiting for machine to come up
	I1207 21:16:30.859753   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:30.860500   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:30.860532   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:30.860479   51971 retry.go:31] will retry after 591.81737ms: waiting for machine to come up
	I1207 21:16:31.453939   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:31.454481   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:31.454508   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:31.454426   51971 retry.go:31] will retry after 818.736684ms: waiting for machine to come up
	I1207 21:16:32.274582   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:32.275065   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:32.275100   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:32.275018   51971 retry.go:31] will retry after 865.865666ms: waiting for machine to come up
	I1207 21:16:33.142356   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:33.142713   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:33.142748   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:33.142655   51971 retry.go:31] will retry after 1.270743306s: waiting for machine to come up
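
Interleaved with the above, the old-k8s-version-483745 profile is still waiting for a DHCP lease and keeps retrying with a growing, jittered delay (the retry.go:31 lines). A loose sketch of such a retry loop; the jitter factor and attempt cap are assumptions, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a randomized,
// growing interval between attempts, much like the lease wait in the log.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += delay / 2 // grow the base interval each round
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	_, _ = waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3)
}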
	I1207 21:16:31.473652   51113 crio.go:444] Took 1.911834 seconds to copy over tarball
	I1207 21:16:31.473729   51113 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:16:34.448164   51113 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974406678s)
	I1207 21:16:34.448185   51113 crio.go:451] Took 2.974507 seconds to extract the tarball
	I1207 21:16:34.448196   51113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:16:34.493579   51113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:34.555669   51113 crio.go:496] all images are preloaded for cri-o runtime.
	I1207 21:16:34.555694   51113 cache_images.go:84] Images are preloaded, skipping loading
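
With no images in the CRI-O store, the preload tarball (~458 MB) is copied over and unpacked, after which crictl confirms the images are present and loading is skipped. The extraction boils down to the two commands sketched below; error handling is simplified and the paths are those from the log.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Unpack the preloaded image tarball exactly as the log does:
	//   sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput(); err != nil {
		log.Fatalf("extracting preload: %v\n%s", err, out)
	}
	// Remove the tarball afterwards, mirroring the rm step, then
	// `sudo crictl images --output json` can confirm the images are present.
	if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
		log.Printf("removing preload tarball: %v", err)
	}
}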
	I1207 21:16:34.555760   51113 ssh_runner.go:195] Run: crio config
	I1207 21:16:34.637813   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:16:34.637855   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:34.637874   51113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:34.637909   51113 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-275828 NodeName:default-k8s-diff-port-275828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 21:16:34.638088   51113 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.254
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-275828"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:34.638186   51113 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-275828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1207 21:16:34.638255   51113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 21:16:34.651147   51113 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:34.651264   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:34.660855   51113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1207 21:16:34.678841   51113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:16:34.696338   51113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
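
The rendered kubelet unit and kubeadm config are written onto the guest; the fresh config lands at /var/tmp/minikube/kubeadm.yaml.new so that restartCluster can later `sudo diff -u` it against the live copy (visible further down in this log). A toy sketch of that write-then-diff pattern, with the rendered content stubbed out and the paths taken from the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// /var/tmp/minikube is created earlier in the post-start step of the log.
	rendered := []byte("# rendered kubeadm config would go here\n")
	if err := os.WriteFile("/var/tmp/minikube/kubeadm.yaml.new", rendered, 0o644); err != nil {
		log.Fatal(err)
	}
	// A non-empty diff (non-zero exit) is the signal that the node needs reconfiguring.
	out, err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err != nil {
		log.Printf("config drift detected:\n%s", out)
	}
}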
	I1207 21:16:34.718058   51113 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:34.722640   51113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:34.737097   51113 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828 for IP: 192.168.39.254
	I1207 21:16:34.737138   51113 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:34.737316   51113 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:34.737367   51113 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:34.737459   51113 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.key
	I1207 21:16:34.737557   51113 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.key.9e1cae77
	I1207 21:16:34.737614   51113 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.key
	I1207 21:16:34.737745   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:34.737783   51113 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:34.737799   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:34.737835   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:34.737870   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:34.737904   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:34.737976   51113 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:34.738542   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:34.768389   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:16:34.801112   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:31.931027   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:34.430620   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:31.903642   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:31.903781   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:31.919330   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:32.403857   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:32.403949   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:32.419078   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:32.903477   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:32.903561   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:32.918946   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:33.403477   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:33.403605   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:33.416411   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:33.903561   51037 api_server.go:166] Checking apiserver status ...
	I1207 21:16:33.903690   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:33.915554   51037 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:34.379314   51037 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:16:34.379347   51037 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:16:34.379361   51037 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:16:34.379450   51037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:34.427182   51037 cri.go:89] found id: ""
	I1207 21:16:34.427255   51037 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:16:34.448141   51037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:16:34.462411   51037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:16:34.462494   51037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:34.474410   51037 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:34.474442   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:34.646144   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.548212   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.745964   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.818060   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:35.899490   51037 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:16:35.899616   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:35.916336   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:36.432466   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
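
For the profile driven by pid 51037, the apiserver never reappears, so the cluster is reconfigured: kube-system containers and the kubelet are stopped, the stale-config check fails (the .conf files are gone), the regenerated kubeadm.yaml is copied into place, and the kubeadm init phases are re-run before polling for the apiserver process again. A compact sketch of that phase sequence; the paths and version string are from the log, and error handling is simplified.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		// Mirrors: sudo env PATH=... kubeadm init phase <phase> --config /var/tmp/minikube/kubeadm.yaml
		cmd := fmt.Sprintf(
			"sudo env PATH=/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
			phase,
		)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
}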
	I1207 21:16:34.415333   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:34.415908   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:34.415935   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:34.415819   51971 retry.go:31] will retry after 1.846003214s: waiting for machine to come up
	I1207 21:16:36.262900   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:36.263321   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:36.263343   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:36.263283   51971 retry.go:31] will retry after 1.858599877s: waiting for machine to come up
	I1207 21:16:38.124144   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:38.124669   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:38.124701   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:38.124622   51971 retry.go:31] will retry after 2.443451278s: waiting for machine to come up
	I1207 21:16:34.830966   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 21:16:35.094040   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:35.121234   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:35.148659   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:35.176938   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:35.206320   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:35.234907   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:35.261034   51113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:35.286500   51113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:35.306742   51113 ssh_runner.go:195] Run: openssl version
	I1207 21:16:35.314676   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:35.325752   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.332066   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.332147   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:35.339606   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:35.350274   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:35.360328   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.365516   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.365593   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:35.371482   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:35.381328   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:35.391869   51113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.396986   51113 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.397051   51113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:35.402939   51113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:35.413428   51113 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:35.419598   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:35.427748   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:35.435492   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:35.442272   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:35.450180   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:35.459639   51113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
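
Before reusing the existing control-plane certificates, each one is checked with `openssl x509 -checkend 86400`, i.e. "does this cert expire within the next 24 hours?". A roughly equivalent check using Go's crypto/x509 is sketched below; the file path is just one of the certs listed above, and the helper name is an assumption.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within the given window (86400s == 24h in the log).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}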
	I1207 21:16:35.467615   51113 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-275828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-275828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:35.467736   51113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:35.467793   51113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:35.504593   51113 cri.go:89] found id: ""
	I1207 21:16:35.504685   51113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:35.514155   51113 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:35.514182   51113 kubeadm.go:636] restartCluster start
	I1207 21:16:35.514255   51113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:35.525515   51113 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:35.526798   51113 kubeconfig.go:92] found "default-k8s-diff-port-275828" server: "https://192.168.39.254:8444"
	I1207 21:16:35.529447   51113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:35.540876   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:35.540934   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:35.555494   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:35.555519   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:35.555569   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:35.569455   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.069801   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:36.069903   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:36.083366   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.569984   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:36.570078   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:36.585387   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:37.069869   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:37.069980   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:37.086900   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:37.570490   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:37.570597   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:37.586215   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:38.069601   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:38.069709   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:38.084557   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:38.570194   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:38.570306   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:38.586686   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:39.070433   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:39.070518   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:39.088460   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:39.570579   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:39.570654   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:39.588478   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:36.785543   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:38.932981   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:36.932228   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:37.432719   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:37.932863   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.432661   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.932210   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:38.965380   51037 api_server.go:72] duration metric: took 3.065893789s to wait for apiserver process to appear ...
	I1207 21:16:38.965409   51037 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:38.965425   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:40.571221   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:40.571824   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:40.571873   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:40.571774   51971 retry.go:31] will retry after 2.349695925s: waiting for machine to come up
	I1207 21:16:42.923107   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:42.923582   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | unable to find current IP address of domain old-k8s-version-483745 in network mk-old-k8s-version-483745
	I1207 21:16:42.923618   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | I1207 21:16:42.923549   51971 retry.go:31] will retry after 4.503894046s: waiting for machine to come up
	I1207 21:16:40.070126   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:40.070229   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:40.085086   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:40.570237   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:40.570329   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:40.584997   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:41.069554   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:41.069706   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:41.084654   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:41.570175   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:41.570260   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:41.581973   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:42.070546   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:42.070641   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:42.085859   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:42.570428   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:42.570534   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:42.585491   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.070017   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:43.070132   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:43.082461   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.569992   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:43.570093   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:43.585221   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:44.069681   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:44.069749   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:44.081499   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:44.569999   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:44.570083   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:44.585512   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:43.598644   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:43.598675   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:43.598689   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:43.649508   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:43.649553   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:44.150221   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:44.155890   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:44.155914   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:44.649610   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:44.655402   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:44.655437   51037 api_server.go:103] status: https://192.168.50.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:45.150082   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:16:45.156432   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 200:
	ok
	I1207 21:16:45.172948   51037 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 21:16:45.172983   51037 api_server.go:131] duration metric: took 6.207566234s to wait for apiserver health ...
	I1207 21:16:45.172996   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:16:45.173002   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:45.175018   51037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:16:41.430106   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:43.431417   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:45.932644   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:45.176436   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:45.231836   51037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:16:45.250256   51037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:45.270151   51037 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:45.270188   51037 system_pods.go:61] "coredns-76f75df574-qfwbr" [577161a0-8d68-41cc-88cd-1bd56e99b7aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:45.270198   51037 system_pods.go:61] "etcd-no-preload-950431" [8e49a6a7-c1e5-469d-9b30-c8e59471effb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:45.270210   51037 system_pods.go:61] "kube-apiserver-no-preload-950431" [15bc33db-995d-4102-9a2b-e991209c2946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:45.270220   51037 system_pods.go:61] "kube-controller-manager-no-preload-950431" [c263b58e-2aea-455d-8b2f-8915f1c6e820] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:45.270232   51037 system_pods.go:61] "kube-proxy-mzv22" [96e51e2f-17be-4724-ae28-99dfa63e9976] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:45.270241   51037 system_pods.go:61] "kube-scheduler-no-preload-950431" [c040d573-c78f-4149-8be6-af33fc6ea186] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:45.270257   51037 system_pods.go:61] "metrics-server-57f55c9bc5-fv8x4" [ac03a70e-1059-474f-b6f6-5974f0900bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:45.270268   51037 system_pods.go:61] "storage-provisioner" [3f942481-221c-4e69-a876-f82676cde788] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:45.270279   51037 system_pods.go:74] duration metric: took 19.99813ms to wait for pod list to return data ...
	I1207 21:16:45.270291   51037 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:45.274636   51037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:45.274667   51037 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:45.274681   51037 node_conditions.go:105] duration metric: took 4.381452ms to run NodePressure ...
	I1207 21:16:45.274700   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:45.597857   51037 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:45.603394   51037 kubeadm.go:787] kubelet initialised
	I1207 21:16:45.603423   51037 kubeadm.go:788] duration metric: took 5.535827ms waiting for restarted kubelet to initialise ...
	I1207 21:16:45.603432   51037 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:45.612509   51037 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-qfwbr" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:47.430850   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.431364   50270 main.go:141] libmachine: (old-k8s-version-483745) Found IP for machine: 192.168.61.171
	I1207 21:16:47.431389   50270 main.go:141] libmachine: (old-k8s-version-483745) Reserving static IP address...
	I1207 21:16:47.431415   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has current primary IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.431791   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "old-k8s-version-483745", mac: "52:54:00:55:c8:35", ip: "192.168.61.171"} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.431827   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | skip adding static IP to network mk-old-k8s-version-483745 - found existing host DHCP lease matching {name: "old-k8s-version-483745", mac: "52:54:00:55:c8:35", ip: "192.168.61.171"}
	I1207 21:16:47.431845   50270 main.go:141] libmachine: (old-k8s-version-483745) Reserved static IP address: 192.168.61.171
	I1207 21:16:47.431866   50270 main.go:141] libmachine: (old-k8s-version-483745) Waiting for SSH to be available...
	I1207 21:16:47.431884   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Getting to WaitForSSH function...
	I1207 21:16:47.434071   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.434391   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.434423   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.434511   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Using SSH client type: external
	I1207 21:16:47.434548   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Using SSH private key: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa (-rw-------)
	I1207 21:16:47.434590   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1207 21:16:47.434624   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | About to run SSH command:
	I1207 21:16:47.434642   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | exit 0
	I1207 21:16:47.529747   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | SSH cmd err, output: <nil>: 
	I1207 21:16:47.530150   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetConfigRaw
	I1207 21:16:47.530743   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:47.533361   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.533690   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.533728   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.534019   50270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/config.json ...
	I1207 21:16:47.534201   50270 machine.go:88] provisioning docker machine ...
	I1207 21:16:47.534219   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:47.534379   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.534549   50270 buildroot.go:166] provisioning hostname "old-k8s-version-483745"
	I1207 21:16:47.534578   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.534793   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.537037   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.537448   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.537482   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.537621   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:47.537788   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.537963   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.538107   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:47.538276   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:47.538728   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:47.538751   50270 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-483745 && echo "old-k8s-version-483745" | sudo tee /etc/hostname
	I1207 21:16:47.694514   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-483745
	
	I1207 21:16:47.694552   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.697720   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.698181   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.698217   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.698413   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:47.698602   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.698752   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:47.698958   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:47.699158   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:47.699617   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:47.699646   50270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-483745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-483745/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-483745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 21:16:47.851750   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 21:16:47.851781   50270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17719-9628/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-9628/.minikube}
	I1207 21:16:47.851817   50270 buildroot.go:174] setting up certificates
	I1207 21:16:47.851830   50270 provision.go:83] configureAuth start
	I1207 21:16:47.851848   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetMachineName
	I1207 21:16:47.852181   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:47.855229   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.855607   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.855633   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.855891   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:47.858432   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.858811   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:47.858868   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:47.859066   50270 provision.go:138] copyHostCerts
	I1207 21:16:47.859126   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem, removing ...
	I1207 21:16:47.859146   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem
	I1207 21:16:47.859211   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/ca.pem (1082 bytes)
	I1207 21:16:47.859312   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem, removing ...
	I1207 21:16:47.859322   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem
	I1207 21:16:47.859352   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/cert.pem (1123 bytes)
	I1207 21:16:47.859426   50270 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem, removing ...
	I1207 21:16:47.859436   50270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem
	I1207 21:16:47.859465   50270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-9628/.minikube/key.pem (1675 bytes)
	I1207 21:16:47.859532   50270 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-483745 san=[192.168.61.171 192.168.61.171 localhost 127.0.0.1 minikube old-k8s-version-483745]
	I1207 21:16:48.080700   50270 provision.go:172] copyRemoteCerts
	I1207 21:16:48.080764   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 21:16:48.080787   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.083799   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.084261   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.084325   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.084545   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.084752   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.084874   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.085025   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.188586   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 21:16:48.217051   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1207 21:16:48.245046   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 21:16:48.276344   50270 provision.go:86] duration metric: configureAuth took 424.496766ms
	I1207 21:16:48.276381   50270 buildroot.go:189] setting minikube options for container-runtime
	I1207 21:16:48.276627   50270 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:16:48.276720   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.280119   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.280556   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.280627   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.280943   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.281127   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.281312   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.281452   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.281621   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:48.282136   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:48.282160   50270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 21:16:45.070516   51113 api_server.go:166] Checking apiserver status ...
	I1207 21:16:45.070618   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:45.087880   51113 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:45.541593   51113 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:16:45.541627   51113 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:16:45.541640   51113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:16:45.541714   51113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:45.589291   51113 cri.go:89] found id: ""
	I1207 21:16:45.589394   51113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:16:45.606397   51113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:16:45.616135   51113 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:16:45.616192   51113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:45.625661   51113 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:16:45.625689   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:45.750072   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.619750   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.838835   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:46.935494   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:47.007474   51113 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:16:47.007536   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:47.020817   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:47.536948   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:48.036982   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:48.537584   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.036899   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.537400   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:16:49.575582   51113 api_server.go:72] duration metric: took 2.568102787s to wait for apiserver process to appear ...
	I1207 21:16:49.575614   51113 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:16:49.575636   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:49.576140   51113 api_server.go:269] stopped: https://192.168.39.254:8444/healthz: Get "https://192.168.39.254:8444/healthz": dial tcp 192.168.39.254:8444: connect: connection refused
	I1207 21:16:49.576174   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:49.576630   51113 api_server.go:269] stopped: https://192.168.39.254:8444/healthz: Get "https://192.168.39.254:8444/healthz": dial tcp 192.168.39.254:8444: connect: connection refused
	I1207 21:16:48.639642   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 21:16:48.639702   50270 machine.go:91] provisioned docker machine in 1.10547448s
	I1207 21:16:48.639715   50270 start.go:300] post-start starting for "old-k8s-version-483745" (driver="kvm2")
	I1207 21:16:48.639733   50270 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 21:16:48.639772   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.640106   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 21:16:48.640136   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.643155   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.643592   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.643625   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.643897   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.644101   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.644253   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.644374   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.756527   50270 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 21:16:48.761976   50270 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 21:16:48.762042   50270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/addons for local assets ...
	I1207 21:16:48.762117   50270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-9628/.minikube/files for local assets ...
	I1207 21:16:48.762229   50270 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem -> 168402.pem in /etc/ssl/certs
	I1207 21:16:48.762355   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 21:16:48.773495   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:48.802433   50270 start.go:303] post-start completed in 162.696963ms
	I1207 21:16:48.802464   50270 fix.go:56] fixHost completed within 20.771337135s
	I1207 21:16:48.802489   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.805389   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.805821   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.805853   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.806002   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.806221   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.806361   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.806516   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.806737   50270 main.go:141] libmachine: Using SSH client type: native
	I1207 21:16:48.807177   50270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.171 22 <nil> <nil>}
	I1207 21:16:48.807194   50270 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 21:16:48.948515   50270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701983808.895290650
	
	I1207 21:16:48.948602   50270 fix.go:206] guest clock: 1701983808.895290650
	I1207 21:16:48.948622   50270 fix.go:219] Guest: 2023-12-07 21:16:48.89529065 +0000 UTC Remote: 2023-12-07 21:16:48.802469186 +0000 UTC m=+365.320601213 (delta=92.821464ms)
	I1207 21:16:48.948679   50270 fix.go:190] guest clock delta is within tolerance: 92.821464ms
	I1207 21:16:48.948694   50270 start.go:83] releasing machines lock for "old-k8s-version-483745", held for 20.917606045s
	I1207 21:16:48.948726   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.948967   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:48.952007   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.952392   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.952424   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.952680   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953302   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953494   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:16:48.953578   50270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 21:16:48.953633   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.953877   50270 ssh_runner.go:195] Run: cat /version.json
	I1207 21:16:48.953904   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:16:48.957083   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957288   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957631   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.957656   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957798   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:48.957849   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:48.957874   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.958105   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.958110   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:16:48.958284   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:16:48.958413   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.958443   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:16:48.958665   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:48.958668   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:16:49.082678   50270 ssh_runner.go:195] Run: systemctl --version
	I1207 21:16:49.091075   50270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 21:16:49.250638   50270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 21:16:49.259237   50270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 21:16:49.259312   50270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 21:16:49.279490   50270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 21:16:49.279520   50270 start.go:475] detecting cgroup driver to use...
	I1207 21:16:49.279592   50270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 21:16:49.301129   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 21:16:49.317758   50270 docker.go:203] disabling cri-docker service (if available) ...
	I1207 21:16:49.317832   50270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 21:16:49.335384   50270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 21:16:49.352808   50270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 21:16:49.487177   50270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 21:16:49.622551   50270 docker.go:219] disabling docker service ...
	I1207 21:16:49.622632   50270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 21:16:49.641913   50270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 21:16:49.655046   50270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 21:16:49.780471   50270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 21:16:49.903816   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 21:16:49.917447   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 21:16:49.939101   50270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1207 21:16:49.939170   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.949112   50270 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1207 21:16:49.949187   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.958706   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.968115   50270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 21:16:49.977516   50270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 21:16:49.987974   50270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 21:16:49.996996   50270 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1207 21:16:49.997069   50270 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1207 21:16:50.009736   50270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 21:16:50.018888   50270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 21:16:50.136461   50270 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 21:16:50.337931   50270 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 21:16:50.338013   50270 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 21:16:50.344175   50270 start.go:543] Will wait 60s for crictl version
	I1207 21:16:50.344237   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:50.348418   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 21:16:50.387227   50270 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1207 21:16:50.387329   50270 ssh_runner.go:195] Run: crio --version
	I1207 21:16:50.439820   50270 ssh_runner.go:195] Run: crio --version
	I1207 21:16:50.492743   50270 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
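The two "Will wait 60s" lines above are simple poll-until-deadline loops executed over SSH via ssh_runner: first for the CRI-O socket to exist, then for crictl to answer a version query. A minimal local sketch of the first wait (a hypothetical helper, not minikube's actual ssh_runner-based code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForCRISocket polls until the CRI-O socket appears or the timeout elapses.
func waitForCRISocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("CRI socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForCRISocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}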
	I1207 21:16:48.431193   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:50.945823   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:47.635909   51037 pod_ready.go:102] pod "coredns-76f75df574-qfwbr" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:49.635091   51037 pod_ready.go:92] pod "coredns-76f75df574-qfwbr" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:49.635119   51037 pod_ready.go:81] duration metric: took 4.022584638s waiting for pod "coredns-76f75df574-qfwbr" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:49.635139   51037 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:51.656178   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:50.494290   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetIP
	I1207 21:16:50.496890   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:50.497226   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:16:50.497257   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:16:50.497557   50270 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1207 21:16:50.501988   50270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 21:16:50.516192   50270 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1207 21:16:50.516266   50270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:50.564641   50270 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1207 21:16:50.564723   50270 ssh_runner.go:195] Run: which lz4
	I1207 21:16:50.569306   50270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 21:16:50.573458   50270 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 21:16:50.573483   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1207 21:16:52.405191   50270 crio.go:444] Took 1.835925 seconds to copy over tarball
	I1207 21:16:52.405260   50270 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 21:16:50.077304   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:54.602961   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:54.602994   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:54.603007   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:54.660014   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:16:54.660053   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:16:55.077712   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:55.102038   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:55.102068   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:55.577664   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:55.586714   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 21:16:55.586753   51113 api_server.go:103] status: https://192.168.39.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 21:16:56.077361   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:16:56.084665   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 200:
	ok
	I1207 21:16:56.096164   51113 api_server.go:141] control plane version: v1.28.4
	I1207 21:16:56.096196   51113 api_server.go:131] duration metric: took 6.520574302s to wait for apiserver health ...
	I1207 21:16:56.096209   51113 cni.go:84] Creating CNI manager for ""
	I1207 21:16:56.096219   51113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
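The healthz exchange above is the expected bootstrap sequence: 403 while anonymous access is still forbidden, 500 while post-start hooks (rbac/bootstrap-roles, the priority-class bootstrap) are pending, then 200 "ok" after ~6.5s. A sketch of that polling loop, assuming an anonymous HTTPS probe with certificate verification disabled (illustrative only, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200,
// treating 403/500 responses as "not ready yet" rather than fatal errors.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			log.Printf("healthz returned %d: %s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.39.254:8444/healthz", 2*time.Minute); err != nil {
		log.Fatal(err)
	}
}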
	I1207 21:16:53.431611   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:55.954091   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:53.656773   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:55.659213   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:56.811148   51113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:16:55.499497   50270 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094207903s)
	I1207 21:16:55.499524   50270 crio.go:451] Took 3.094311 seconds to extract the tarball
	I1207 21:16:55.499532   50270 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 21:16:55.539952   50270 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 21:16:55.612029   50270 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1207 21:16:55.612059   50270 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 21:16:55.612164   50270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.612216   50270 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1207 21:16:55.612282   50270 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.612335   50270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.612216   50270 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.612433   50270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.612564   50270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.612575   50270 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:55.614472   50270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.614496   50270 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1207 21:16:55.614496   50270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.614507   50270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.614513   50270 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:55.614556   50270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.614571   50270 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.614556   50270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.744531   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1207 21:16:55.744539   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.747157   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.748014   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.754498   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:55.778012   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:55.781417   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:55.886272   50270 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1207 21:16:55.886318   50270 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1207 21:16:55.886371   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.949015   50270 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1207 21:16:55.949128   50270 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:55.949205   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.963217   50270 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1207 21:16:55.963332   50270 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:55.963422   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:55.966733   50270 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1207 21:16:55.966854   50270 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1207 21:16:55.966934   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.004614   50270 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1207 21:16:56.004668   50270 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:56.004721   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.015557   50270 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1207 21:16:56.015655   50270 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:56.015714   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.017603   50270 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1207 21:16:56.017643   50270 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:56.017686   50270 ssh_runner.go:195] Run: which crictl
	I1207 21:16:56.017817   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1207 21:16:56.017913   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1207 21:16:56.018011   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1207 21:16:56.018087   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1207 21:16:56.018160   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1207 21:16:56.028183   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1207 21:16:56.030370   50270 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1207 21:16:56.222552   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1207 21:16:56.222625   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1207 21:16:56.222673   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1207 21:16:56.222680   50270 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.222731   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1207 21:16:56.222828   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1207 21:16:56.222911   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1207 21:16:56.236367   50270 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1207 21:16:56.236387   50270 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.236440   50270 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1207 21:16:56.236444   50270 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1207 21:16:56.455526   50270 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:58.094353   50270 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.638791166s)
	I1207 21:16:58.094525   50270 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.858047565s)
	I1207 21:16:58.094552   50270 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1207 21:16:58.094591   50270 cache_images.go:92] LoadImages completed in 2.482516651s
	W1207 21:16:58.094650   50270 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17719-9628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I1207 21:16:58.094729   50270 ssh_runner.go:195] Run: crio config
	I1207 21:16:58.191059   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:16:58.191083   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:16:58.191108   50270 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 21:16:58.191132   50270 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.171 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-483745 NodeName:old-k8s-version-483745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1207 21:16:58.191279   50270 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-483745"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-483745
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.171:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 21:16:58.191389   50270 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-483745 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-483745 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 21:16:58.191462   50270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1207 21:16:58.204882   50270 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 21:16:58.204948   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 21:16:58.217370   50270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1207 21:16:58.237205   50270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 21:16:58.256539   50270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1207 21:16:58.276428   50270 ssh_runner.go:195] Run: grep 192.168.61.171	control-plane.minikube.internal$ /etc/hosts
	I1207 21:16:58.281568   50270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
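The one-liner above is the usual idempotent hosts-file update: strip any existing control-plane.minikube.internal entry, append the current IP, and copy the result back over /etc/hosts. A rough Go equivalent (a hypothetical helper; minikube itself runs the shell pipeline shown above over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry removes any stale "<name>" line from the hosts file and
// appends "<ip>\t<name>", mirroring the grep -v / echo / cp pipeline.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.61.171", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}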
	I1207 21:16:58.295073   50270 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745 for IP: 192.168.61.171
	I1207 21:16:58.295112   50270 certs.go:190] acquiring lock for shared ca certs: {Name:mk2428ff8af158988e6eadcaadcc4f70bec0adab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:58.295295   50270 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key
	I1207 21:16:58.295368   50270 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key
	I1207 21:16:58.295493   50270 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.key
	I1207 21:16:58.295589   50270 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.key.13a54c20
	I1207 21:16:58.295658   50270 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.key
	I1207 21:16:58.295817   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem (1338 bytes)
	W1207 21:16:58.295861   50270 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840_empty.pem, impossibly tiny 0 bytes
	I1207 21:16:58.295887   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 21:16:58.295922   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/ca.pem (1082 bytes)
	I1207 21:16:58.295972   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/cert.pem (1123 bytes)
	I1207 21:16:58.296012   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/certs/home/jenkins/minikube-integration/17719-9628/.minikube/certs/key.pem (1675 bytes)
	I1207 21:16:58.296067   50270 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem (1708 bytes)
	I1207 21:16:58.296936   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 21:16:58.327708   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 21:16:58.354646   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 21:16:58.379025   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 21:16:58.404362   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 21:16:58.433648   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 21:16:58.459739   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 21:16:58.487457   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 21:16:58.516507   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 21:16:57.214999   51113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:16:57.244196   51113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:16:57.264778   51113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:16:57.978177   51113 system_pods.go:59] 8 kube-system pods found
	I1207 21:16:57.978214   51113 system_pods.go:61] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 21:16:57.978224   51113 system_pods.go:61] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 21:16:57.978232   51113 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 21:16:57.978241   51113 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 21:16:57.978248   51113 system_pods.go:61] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 21:16:57.978254   51113 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 21:16:57.978261   51113 system_pods.go:61] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:16:57.978267   51113 system_pods.go:61] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:16:57.978276   51113 system_pods.go:74] duration metric: took 713.475246ms to wait for pod list to return data ...
	I1207 21:16:57.978285   51113 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:16:57.983354   51113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:16:57.983379   51113 node_conditions.go:123] node cpu capacity is 2
	I1207 21:16:57.983389   51113 node_conditions.go:105] duration metric: took 5.099916ms to run NodePressure ...
	I1207 21:16:57.983403   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:16:58.583287   51113 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:16:58.590472   51113 kubeadm.go:787] kubelet initialised
	I1207 21:16:58.590500   51113 kubeadm.go:788] duration metric: took 7.176115ms waiting for restarted kubelet to initialise ...
	I1207 21:16:58.590509   51113 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:16:58.597622   51113 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.609459   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.609491   51113 pod_ready.go:81] duration metric: took 11.841558ms waiting for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.609503   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.609513   51113 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.620143   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.620172   51113 pod_ready.go:81] duration metric: took 10.647465ms waiting for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.620185   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.620193   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.633821   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.633850   51113 pod_ready.go:81] duration metric: took 13.645914ms waiting for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.633864   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.633872   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.647333   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.647359   51113 pod_ready.go:81] duration metric: took 13.477348ms waiting for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.647373   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.647385   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:58.988420   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-proxy-nmx2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.988448   51113 pod_ready.go:81] duration metric: took 341.054838ms waiting for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:58.988457   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-proxy-nmx2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:58.988465   51113 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.388053   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.388080   51113 pod_ready.go:81] duration metric: took 399.605098ms waiting for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:59.388090   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.388097   51113 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.787887   51113 pod_ready.go:97] node "default-k8s-diff-port-275828" hosting pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.787913   51113 pod_ready.go:81] duration metric: took 399.809388ms waiting for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	E1207 21:16:59.787925   51113 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-275828" hosting pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:16:59.787932   51113 pod_ready.go:38] duration metric: took 1.197413161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
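Each pod_ready.go wait above is a bounded poll on the pod's Ready condition, bailing out early (as the WaitExtra errors show) when the hosting node itself is not Ready. A condensed client-go sketch of the core loop, assuming a clientset built from the kubeconfig used in this run (illustrative only, not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports the Ready condition True.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17719-9628/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-default-k8s-diff-port-275828", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}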
	I1207 21:16:59.787945   51113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:16:59.801806   51113 ops.go:34] apiserver oom_adj: -16
	I1207 21:16:59.801828   51113 kubeadm.go:640] restartCluster took 24.28763849s
	I1207 21:16:59.801837   51113 kubeadm.go:406] StartCluster complete in 24.334230687s
	I1207 21:16:59.801855   51113 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:59.801945   51113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:16:59.804179   51113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:16:59.804458   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:16:59.804515   51113 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:16:59.804612   51113 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.804638   51113 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.804646   51113 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:16:59.804695   51113 config.go:182] Loaded profile config "default-k8s-diff-port-275828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:16:59.804714   51113 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.804727   51113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-275828"
	I1207 21:16:59.804704   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.805119   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805150   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805168   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.805180   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.805204   51113 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-275828"
	I1207 21:16:59.805226   51113 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.805235   51113 addons.go:240] addon metrics-server should already be in state true
	I1207 21:16:59.805277   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.805627   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.805663   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.811657   51113 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-275828" context rescaled to 1 replicas
	I1207 21:16:59.811696   51113 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:16:59.814005   51113 out.go:177] * Verifying Kubernetes components...
	I1207 21:16:59.815636   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:16:59.822134   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I1207 21:16:59.822558   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.822636   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34811
	I1207 21:16:59.822718   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I1207 21:16:59.823063   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823104   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823126   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.823128   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.823479   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.823605   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823619   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.823636   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823636   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.823943   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.823970   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.824050   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.824102   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.824193   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.824463   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.824502   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.828241   51113 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-275828"
	W1207 21:16:59.828264   51113 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:16:59.828292   51113 host.go:66] Checking if "default-k8s-diff-port-275828" exists ...
	I1207 21:16:59.828676   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.830577   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.841996   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I1207 21:16:59.842283   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I1207 21:16:59.842697   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.842888   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.843254   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.843277   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.843391   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.843416   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.843638   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.843779   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.843831   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.843973   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.845644   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.845852   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.847586   51113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:16:59.847253   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I1207 21:16:59.849062   51113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:16:57.998272   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:00.429603   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:59.850487   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:16:59.850500   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:16:59.850514   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.849121   51113 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:16:59.850564   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:16:59.850583   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.849452   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.851054   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.851071   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.851664   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.852274   51113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:16:59.852315   51113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:16:59.854738   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.855190   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.855204   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.855394   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.855556   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.855649   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.855724   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.856210   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.856582   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.856596   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.856720   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.856846   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.857188   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.857324   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.871856   51113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I1207 21:16:59.872193   51113 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:16:59.872726   51113 main.go:141] libmachine: Using API Version  1
	I1207 21:16:59.872744   51113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:16:59.873088   51113 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:16:59.873243   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetState
	I1207 21:16:59.874542   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .DriverName
	I1207 21:16:59.874803   51113 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:16:59.874821   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:16:59.874840   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHHostname
	I1207 21:16:59.877142   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.877524   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:1f:c5", ip: ""} in network mk-default-k8s-diff-port-275828: {Iface:virbr1 ExpiryTime:2023-12-07 22:16:17 +0000 UTC Type:0 Mac:52:54:00:f3:1f:c5 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:default-k8s-diff-port-275828 Clientid:01:52:54:00:f3:1f:c5}
	I1207 21:16:59.877547   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | domain default-k8s-diff-port-275828 has defined IP address 192.168.39.254 and MAC address 52:54:00:f3:1f:c5 in network mk-default-k8s-diff-port-275828
	I1207 21:16:59.877753   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHPort
	I1207 21:16:59.877889   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHKeyPath
	I1207 21:16:59.878024   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .GetSSHUsername
	I1207 21:16:59.878137   51113 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/default-k8s-diff-port-275828/id_rsa Username:docker}
	I1207 21:16:59.983279   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:17:00.040397   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:17:00.056981   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:17:00.057008   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:17:00.078195   51113 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1207 21:17:00.078235   51113 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-275828" to be "Ready" ...
	I1207 21:17:00.117369   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:17:00.117399   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:17:00.177756   51113 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:17:00.177783   51113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:17:00.220667   51113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:17:01.338599   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.298167461s)
	I1207 21:17:01.338648   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338662   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.338747   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.355434262s)
	I1207 21:17:01.338789   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338802   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.338925   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.338945   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.338960   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.338969   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.340360   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340373   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340381   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.340357   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340368   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340472   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.340490   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.340504   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.340785   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.340788   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.340804   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.347722   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.347741   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.347933   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.347950   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.347968   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.434021   51113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.213311264s)
	I1207 21:17:01.434084   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.434099   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.434391   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.434413   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.434410   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) DBG | Closing plugin on server side
	I1207 21:17:01.434423   51113 main.go:141] libmachine: Making call to close driver server
	I1207 21:17:01.434434   51113 main.go:141] libmachine: (default-k8s-diff-port-275828) Calling .Close
	I1207 21:17:01.434627   51113 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:17:01.434637   51113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:17:01.434648   51113 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-275828"
	I1207 21:17:01.436476   51113 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:16:57.997177   51037 pod_ready.go:102] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:59.154238   51037 pod_ready.go:92] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.154261   51037 pod_ready.go:81] duration metric: took 9.519115953s waiting for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.154270   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.159402   51037 pod_ready.go:92] pod "kube-apiserver-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.159421   51037 pod_ready.go:81] duration metric: took 5.143876ms waiting for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.159431   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.164107   51037 pod_ready.go:92] pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.164124   51037 pod_ready.go:81] duration metric: took 4.684573ms waiting for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.164134   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mzv22" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.168711   51037 pod_ready.go:92] pod "kube-proxy-mzv22" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.168727   51037 pod_ready.go:81] duration metric: took 4.587318ms waiting for pod "kube-proxy-mzv22" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.168734   51037 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.201648   51037 pod_ready.go:92] pod "kube-scheduler-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:16:59.201676   51037 pod_ready.go:81] duration metric: took 32.935891ms waiting for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:16:59.201688   51037 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:01.509707   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:16:58.544765   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/certs/16840.pem --> /usr/share/ca-certificates/16840.pem (1338 bytes)
	I1207 21:16:58.571376   50270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/ssl/certs/168402.pem --> /usr/share/ca-certificates/168402.pem (1708 bytes)
	I1207 21:16:58.597700   50270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 21:16:58.616720   50270 ssh_runner.go:195] Run: openssl version
	I1207 21:16:58.622830   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168402.pem && ln -fs /usr/share/ca-certificates/168402.pem /etc/ssl/certs/168402.pem"
	I1207 21:16:58.634656   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.640469   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:13 /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.640526   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168402.pem
	I1207 21:16:58.646624   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168402.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 21:16:58.660113   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 21:16:58.670742   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.675735   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.675782   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 21:16:58.682821   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 21:16:58.696760   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16840.pem && ln -fs /usr/share/ca-certificates/16840.pem /etc/ssl/certs/16840.pem"
	I1207 21:16:58.710547   50270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.716983   50270 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:13 /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.717048   50270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16840.pem
	I1207 21:16:58.724400   50270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16840.pem /etc/ssl/certs/51391683.0"
	I1207 21:16:58.736496   50270 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 21:16:58.742587   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 21:16:58.750398   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 21:16:58.757537   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 21:16:58.764361   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 21:16:58.771280   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 21:16:58.778697   50270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 21:16:58.785873   50270 kubeadm.go:404] StartCluster: {Name:old-k8s-version-483745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-483745 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 21:16:58.786022   50270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 21:16:58.786079   50270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:16:58.834174   50270 cri.go:89] found id: ""
	I1207 21:16:58.834262   50270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 21:16:58.845932   50270 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 21:16:58.845958   50270 kubeadm.go:636] restartCluster start
	I1207 21:16:58.846025   50270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 21:16:58.855982   50270 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:58.857458   50270 kubeconfig.go:92] found "old-k8s-version-483745" server: "https://192.168.61.171:8443"
	I1207 21:16:58.860840   50270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 21:16:58.870183   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:58.870235   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:58.881631   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:58.881647   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:58.881693   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:58.892422   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:59.393094   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:59.393163   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:59.405578   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:16:59.893104   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:16:59.893160   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:16:59.906998   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:00.393560   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:00.393629   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:00.405837   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:00.893376   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:00.893472   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:00.905785   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.393118   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:01.393204   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:01.405693   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.893214   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:01.893348   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:01.906272   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:02.392588   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:02.392682   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:02.404717   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:02.893325   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:02.893425   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:02.906705   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:03.392549   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:03.392627   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:03.406493   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:01.437892   51113 addons.go:502] enable addons completed in 1.633389199s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:17:02.198851   51113 node_ready.go:58] node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:17:04.199518   51113 node_ready.go:58] node "default-k8s-diff-port-275828" has status "Ready":"False"
	I1207 21:17:02.931262   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:05.431344   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:03.509733   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:05.511779   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:03.892711   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:03.892814   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:03.905553   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:04.393144   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:04.393236   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:04.406280   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:04.893375   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:04.893459   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:04.905715   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.393376   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:05.393473   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:05.405757   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.892719   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:05.892800   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:05.906258   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:06.392706   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:06.392787   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:06.405913   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:06.893392   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:06.893475   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:06.908660   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:07.392944   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:07.393037   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:07.408113   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:07.892488   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:07.892602   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:07.905157   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:08.393126   50270 api_server.go:166] Checking apiserver status ...
	I1207 21:17:08.393209   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1207 21:17:08.405227   50270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1207 21:17:05.197790   51113 node_ready.go:49] node "default-k8s-diff-port-275828" has status "Ready":"True"
	I1207 21:17:05.197814   51113 node_ready.go:38] duration metric: took 5.119553512s waiting for node "default-k8s-diff-port-275828" to be "Ready" ...
	I1207 21:17:05.197825   51113 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:17:05.204644   51113 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:07.225887   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:09.229380   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:07.928733   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:09.929797   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:08.009114   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:10.012079   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:08.870396   50270 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1207 21:17:08.870427   50270 kubeadm.go:1135] stopping kube-system containers ...
	I1207 21:17:08.870439   50270 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1207 21:17:08.870496   50270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 21:17:08.914337   50270 cri.go:89] found id: ""
	I1207 21:17:08.914412   50270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 21:17:08.932406   50270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:17:08.941877   50270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:17:08.942012   50270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:17:08.952016   50270 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 21:17:08.952038   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:09.086175   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:09.811331   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.044161   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.117851   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:10.218309   50270 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:17:10.218376   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:10.231007   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:10.754756   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.255150   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.755138   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:17:11.782482   50270 api_server.go:72] duration metric: took 1.564169408s to wait for apiserver process to appear ...
	I1207 21:17:11.782510   50270 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:17:11.782543   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:11.729870   51113 pod_ready.go:102] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:12.727588   51113 pod_ready.go:92] pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.727621   51113 pod_ready.go:81] duration metric: took 7.52294973s waiting for pod "coredns-5dd5756b68-drrlk" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.727635   51113 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.733893   51113 pod_ready.go:92] pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.733936   51113 pod_ready.go:81] duration metric: took 6.276731ms waiting for pod "etcd-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.733951   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.739431   51113 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.739456   51113 pod_ready.go:81] duration metric: took 5.495838ms waiting for pod "kube-apiserver-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.739467   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.745435   51113 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.745456   51113 pod_ready.go:81] duration metric: took 5.98053ms waiting for pod "kube-controller-manager-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.745468   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.751301   51113 pod_ready.go:92] pod "kube-proxy-nmx2z" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:12.751323   51113 pod_ready.go:81] duration metric: took 5.845741ms waiting for pod "kube-proxy-nmx2z" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:12.751333   51113 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:13.122896   51113 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace has status "Ready":"True"
	I1207 21:17:13.122923   51113 pod_ready.go:81] duration metric: took 371.582675ms waiting for pod "kube-scheduler-default-k8s-diff-port-275828" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:13.122936   51113 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	I1207 21:17:11.931676   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:14.433505   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:12.510180   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:14.511615   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.519216   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.783319   50270 api_server.go:269] stopped: https://192.168.61.171:8443/healthz: Get "https://192.168.61.171:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1207 21:17:16.783432   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:17.468175   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 21:17:17.468210   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 21:17:17.968919   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:17.975181   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1207 21:17:17.975206   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1207 21:17:18.469287   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:18.476311   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1207 21:17:18.476340   50270 api_server.go:103] status: https://192.168.61.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1207 21:17:18.968605   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:17:18.974285   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:17:18.981956   50270 api_server.go:141] control plane version: v1.16.0
	I1207 21:17:18.981983   50270 api_server.go:131] duration metric: took 7.199466057s to wait for apiserver health ...
	I1207 21:17:18.981994   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:17:18.982000   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:17:18.983962   50270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:17:15.433488   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:17.434321   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:16.931755   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:19.430606   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:19.010615   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:21.512114   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:18.985481   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:17:18.994841   50270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:17:19.015418   50270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:17:19.029654   50270 system_pods.go:59] 7 kube-system pods found
	I1207 21:17:19.029685   50270 system_pods.go:61] "coredns-5644d7b6d9-b8rqh" [5d8a0014-c012-4e9b-950a-44339be1d9ba] Running
	I1207 21:17:19.029692   50270 system_pods.go:61] "etcd-old-k8s-version-483745" [4a920248-1b35-4834-9e6f-a0e7567b5bb8] Running
	I1207 21:17:19.029699   50270 system_pods.go:61] "kube-apiserver-old-k8s-version-483745" [aaba6fb9-56a1-497d-a398-5c685f5500dd] Running
	I1207 21:17:19.029706   50270 system_pods.go:61] "kube-controller-manager-old-k8s-version-483745" [a13bda00-a0f4-4f59-8b52-65589579efcf] Running
	I1207 21:17:19.029711   50270 system_pods.go:61] "kube-proxy-wrl9t" [3fda8e7e-5f7a-44f5-b028-c6186b30c4b1] Running
	I1207 21:17:19.029715   50270 system_pods.go:61] "kube-scheduler-old-k8s-version-483745" [4fc3e12a-e294-457e-912f-0ed765ad4def] Running
	I1207 21:17:19.029718   50270 system_pods.go:61] "storage-provisioner" [42976d19-9d13-4d3d-832b-3427a68a1644] Running
	I1207 21:17:19.029726   50270 system_pods.go:74] duration metric: took 14.290629ms to wait for pod list to return data ...
	I1207 21:17:19.029739   50270 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:17:19.033868   50270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:17:19.033897   50270 node_conditions.go:123] node cpu capacity is 2
	I1207 21:17:19.033911   50270 node_conditions.go:105] duration metric: took 4.166175ms to run NodePressure ...
	I1207 21:17:19.033945   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 21:17:19.284413   50270 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1207 21:17:19.288373   50270 retry.go:31] will retry after 182.556746ms: kubelet not initialised
	I1207 21:17:19.479987   50270 retry.go:31] will retry after 253.110045ms: kubelet not initialised
	I1207 21:17:19.744586   50270 retry.go:31] will retry after 608.133785ms: kubelet not initialised
	I1207 21:17:20.357758   50270 retry.go:31] will retry after 829.182382ms: kubelet not initialised
	I1207 21:17:21.192621   50270 retry.go:31] will retry after 998.365497ms: kubelet not initialised
	I1207 21:17:22.196882   50270 retry.go:31] will retry after 1.144379185s: kubelet not initialised
	I1207 21:17:23.346660   50270 retry.go:31] will retry after 4.175853771s: kubelet not initialised
	I1207 21:17:19.937119   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:22.433221   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:21.430858   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:23.929526   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:25.932244   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:24.011486   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:26.509908   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:27.529200   50270 retry.go:31] will retry after 6.099259697s: kubelet not initialised
	I1207 21:17:24.932035   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:26.932432   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:28.935455   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:27.933244   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:30.431008   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:29.009917   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:31.509259   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:31.432441   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.933226   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:32.431713   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:34.931903   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.510686   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:35.511611   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:33.635018   50270 retry.go:31] will retry after 3.426713545s: kubelet not initialised
	I1207 21:17:37.067021   50270 retry.go:31] will retry after 7.020738309s: kubelet not initialised
	I1207 21:17:35.933872   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:38.432200   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:37.432208   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:39.432443   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:38.008964   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:40.013143   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:40.434554   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:42.935808   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:41.931614   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:44.431445   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:42.510798   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:45.010221   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:44.093245   50270 retry.go:31] will retry after 15.092242293s: kubelet not initialised
	I1207 21:17:45.433353   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:47.933249   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:46.931078   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:49.430564   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:47.510355   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:50.010022   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:49.935001   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:52.433167   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:51.430664   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:53.431310   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:55.431508   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:52.509729   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:55.010127   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:54.937299   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.432126   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.929516   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:59.929800   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:57.511723   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:00.010732   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:17:59.190582   50270 retry.go:31] will retry after 18.708242221s: kubelet not initialised
	I1207 21:17:59.932898   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.435773   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.429487   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.931336   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:02.011470   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.508873   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:06.510378   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:04.932311   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:07.434111   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:07.431033   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.931058   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.009614   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:11.009942   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:09.932527   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:11.933100   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:14.432890   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:12.429420   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:14.431778   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:13.010085   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:15.509812   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:17.907480   50270 kubeadm.go:787] kubelet initialised
	I1207 21:18:17.907516   50270 kubeadm.go:788] duration metric: took 58.6230723s waiting for restarted kubelet to initialise ...
	I1207 21:18:17.907523   50270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:18:17.912349   50270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.917692   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.917710   50270 pod_ready.go:81] duration metric: took 5.339125ms waiting for pod "coredns-5644d7b6d9-b8rqh" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.917718   50270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.923173   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.923192   50270 pod_ready.go:81] duration metric: took 5.469466ms waiting for pod "coredns-5644d7b6d9-cc8gx" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.923200   50270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.928824   50270 pod_ready.go:92] pod "etcd-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.928846   50270 pod_ready.go:81] duration metric: took 5.638159ms waiting for pod "etcd-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.928856   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.934993   50270 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:17.935014   50270 pod_ready.go:81] duration metric: took 6.149728ms waiting for pod "kube-apiserver-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:17.935025   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.311907   50270 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:18.311934   50270 pod_ready.go:81] duration metric: took 376.900024ms waiting for pod "kube-controller-manager-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.311947   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
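The pod_ready.go messages above come from a simple poll loop: each system-critical pod is fetched repeatedly until its Ready condition turns True or the per-pod timeout (here 4m0s) expires, which is why the metrics-server pods keep logging "Ready":"False" every couple of seconds. The sketch below is only an illustration of that pattern using client-go; it is not minikube's pod_ready.go, and the clientset, namespace, pod name and 2-second polling interval are assumptions.

    // podready_sketch.go: minimal illustration of polling a pod's Ready
    // condition, in the spirit of the pod_ready.go wait messages in this log.
    package podready

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the named pod until its Ready condition is True or
    // the timeout expires, roughly matching "waiting up to 4m0s for pod ...".
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient errors: keep retrying until the deadline
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status)
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil // no Ready condition reported yet
            })
    }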
	I1207 21:18:16.931768   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.932732   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:16.930954   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.932194   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.009341   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:20.010383   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:18.709795   50270 pod_ready.go:92] pod "kube-proxy-wrl9t" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:18.709818   50270 pod_ready.go:81] duration metric: took 397.865434ms waiting for pod "kube-proxy-wrl9t" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:18.709828   50270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:19.107018   50270 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace has status "Ready":"True"
	I1207 21:18:19.107046   50270 pod_ready.go:81] duration metric: took 397.21085ms waiting for pod "kube-scheduler-old-k8s-version-483745" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:19.107074   50270 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace to be "Ready" ...
	I1207 21:18:21.413113   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.414993   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:20.937780   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.432192   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:21.429764   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:23.430826   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.930929   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:22.510894   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.009872   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.914333   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.914486   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:25.432249   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.432529   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.930973   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.430718   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:27.510016   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.009983   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:30.415400   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.912237   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:29.932694   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.433150   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.432680   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.931118   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:32.010572   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.508896   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:36.509628   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.913374   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:36.914250   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:34.933409   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:37.432655   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.432740   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:37.430165   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.930630   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:39.009629   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:41.009658   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:38.914325   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:40.915158   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:43.413980   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:41.932574   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:44.432525   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:42.431330   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:44.929635   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:43.009978   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:45.010954   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:45.414082   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.415225   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:46.932342   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:48.932460   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.429890   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.931948   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:47.508820   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.508885   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:51.510909   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:49.916969   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:52.414590   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:51.431888   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:53.432497   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:52.429836   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.429987   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.010442   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.520121   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:54.415187   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.914505   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:55.433372   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:57.437496   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:56.932937   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.430774   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.010885   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.510473   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.413820   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.413911   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.414163   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:18:59.932159   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.932344   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:04.432873   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:01.430926   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.930199   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.930253   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:03.511496   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.512541   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:05.913832   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:07.915554   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:06.433629   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:08.933148   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:07.931760   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.431655   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:08.009852   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.010279   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:10.415114   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.913846   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:11.433166   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:13.933572   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.930147   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:14.935480   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:12.010617   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:14.510815   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:15.414959   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.913372   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:16.433375   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:18.932915   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.436017   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.933613   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:17.008855   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.010583   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.510650   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:19.913760   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.913931   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:21.434113   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:23.932185   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:22.429942   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:24.432486   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:24.009731   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.513595   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:23.913964   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:25.915033   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:28.415173   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.433721   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:28.932763   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:26.934197   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:29.432795   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:29.008998   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.011163   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:30.912991   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:32.914672   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.432802   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.932626   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:31.930505   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.931069   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:33.510138   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:36.010166   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:34.915019   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:37.414169   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:35.933595   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.432419   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:36.433061   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.929697   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.930753   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:38.509265   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.509898   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:39.414719   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:41.914208   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:40.932356   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:42.932643   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:43.430519   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:45.930095   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:42.510763   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:44.511006   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:43.914874   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:46.414739   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:45.431904   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.932732   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.930507   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:49.930634   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:47.009537   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:49.009825   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.010633   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:48.914101   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.413288   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:50.433022   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:52.932549   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:51.930920   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:54.433488   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:53.508693   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.509440   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:53.913446   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.914532   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.416064   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:55.432116   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:57.935271   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:56.929900   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.931501   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:19:58.009318   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.510190   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.915025   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.414806   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:00.432326   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:02.432758   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:04.434643   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:01.431826   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.931069   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.931648   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:03.010188   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.010498   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:05.914269   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:07.914640   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:06.931909   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:08.932549   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:08.431136   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.932438   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:07.509186   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:09.511791   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.415605   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:12.918130   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:10.934599   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:13.434477   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:13.430502   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.434943   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:12.008903   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:14.010390   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:16.509062   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.415237   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.914465   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:15.435338   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.933559   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:17.931293   50624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:18.408309   50624 pod_ready.go:81] duration metric: took 4m0.000858815s waiting for pod "metrics-server-57f55c9bc5-sndh4" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:18.408355   50624 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:20:18.408376   50624 pod_ready.go:38] duration metric: took 4m11.111070516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:18.408405   50624 kubeadm.go:640] restartCluster took 4m30.625453328s
	W1207 21:20:18.408479   50624 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:20:18.408513   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:20:18.510036   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:20.510485   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:19.915160   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:21.915544   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:19.940064   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:22.432481   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:24.432791   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:23.010158   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:25.509777   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:23.915685   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:26.414017   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.415525   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:26.435601   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.932153   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:28.009824   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:30.509369   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:32.372266   50624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.96372485s)
	I1207 21:20:32.372349   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:20:32.386002   50624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:20:32.395757   50624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:20:32.406709   50624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:20:32.406761   50624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:20:32.465707   50624 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 21:20:32.465842   50624 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:20:32.636031   50624 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:20:32.636171   50624 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:20:32.636296   50624 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:20:32.892368   50624 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:20:32.894341   50624 out.go:204]   - Generating certificates and keys ...
	I1207 21:20:32.894484   50624 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:20:32.894581   50624 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:20:32.894717   50624 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:20:32.894799   50624 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:20:32.895289   50624 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:20:32.895583   50624 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:20:32.896112   50624 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:20:32.896577   50624 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:20:32.897032   50624 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:20:32.897567   50624 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:20:32.897804   50624 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:20:32.897886   50624 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:20:32.942322   50624 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:20:33.084899   50624 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:20:33.286309   50624 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:20:33.482188   50624 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:20:33.483077   50624 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:20:33.487928   50624 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:20:30.912937   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:32.914703   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:30.934926   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:33.431849   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:33.489853   50624 out.go:204]   - Booting up control plane ...
	I1207 21:20:33.490021   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:20:33.490177   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:20:33.490458   50624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:20:33.509319   50624 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:20:33.509448   50624 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:20:33.509501   50624 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:20:33.654452   50624 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:20:32.509729   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:34.510930   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:34.918486   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.414467   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:35.432767   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.931132   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:37.009506   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:39.011200   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.509897   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.657033   50624 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003082 seconds
	I1207 21:20:41.657193   50624 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:20:41.673142   50624 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:20:42.218438   50624 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:20:42.218706   50624 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-598346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:20:42.745090   50624 kubeadm.go:322] [bootstrap-token] Using token: 74zooz.4uhmxlwojs4pjw69
	I1207 21:20:42.746934   50624 out.go:204]   - Configuring RBAC rules ...
	I1207 21:20:42.747111   50624 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:20:42.762521   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:20:42.776210   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:20:42.781152   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:20:42.786698   50624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:20:42.795815   50624 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:20:42.811407   50624 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:20:43.073430   50624 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:20:43.167611   50624 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:20:43.168880   50624 kubeadm.go:322] 
	I1207 21:20:43.168970   50624 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:20:43.169014   50624 kubeadm.go:322] 
	I1207 21:20:43.169111   50624 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:20:43.169132   50624 kubeadm.go:322] 
	I1207 21:20:43.169163   50624 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:20:43.169239   50624 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:20:43.169314   50624 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:20:43.169322   50624 kubeadm.go:322] 
	I1207 21:20:43.169394   50624 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:20:43.169402   50624 kubeadm.go:322] 
	I1207 21:20:43.169475   50624 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:20:43.169500   50624 kubeadm.go:322] 
	I1207 21:20:43.169591   50624 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:20:43.169701   50624 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:20:43.169799   50624 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:20:43.169811   50624 kubeadm.go:322] 
	I1207 21:20:43.169930   50624 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:20:43.170066   50624 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:20:43.170078   50624 kubeadm.go:322] 
	I1207 21:20:43.170177   50624 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 74zooz.4uhmxlwojs4pjw69 \
	I1207 21:20:43.170303   50624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:20:43.170332   50624 kubeadm.go:322] 	--control-plane 
	I1207 21:20:43.170338   50624 kubeadm.go:322] 
	I1207 21:20:43.170463   50624 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:20:43.170474   50624 kubeadm.go:322] 
	I1207 21:20:43.170590   50624 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 74zooz.4uhmxlwojs4pjw69 \
	I1207 21:20:43.170717   50624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:20:43.171438   50624 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:20:43.171461   50624 cni.go:84] Creating CNI manager for ""
	I1207 21:20:43.171467   50624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:20:43.173556   50624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:20:39.415520   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.416257   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:39.933233   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:41.933860   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:44.432482   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:43.175267   50624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:20:43.199404   50624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
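The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration selected for the "kvm2" driver + "crio" runtime combination. The log does not reproduce its contents, so the sketch below only shows what a typical bridge conflist looks like; every field value here is an assumption for illustration, not the file minikube actually wrote.

    // cni_bridge_sketch.go: illustrative generation of a bridge CNI conflist
    // similar in shape to /etc/cni/net.d/1-k8s.conflist; values are assumed.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conflist := map[string]interface{}{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]interface{}{
                {
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]interface{}{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16", // example pod CIDR, not taken from this run
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out)) // would be written to /etc/cni/net.d/1-k8s.conflist
    }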
	I1207 21:20:43.237091   50624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:20:43.237150   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.237203   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=embed-certs-598346 minikube.k8s.io/updated_at=2023_12_07T21_20_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.303369   50624 ops.go:34] apiserver oom_adj: -16
	I1207 21:20:43.670500   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.788364   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:44.394973   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:44.894494   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:45.394695   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:45.895141   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:43.509949   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:45.511007   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:43.915384   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:45.916082   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:47.916757   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:46.432649   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:48.434738   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:46.394706   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:46.894743   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.395117   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.894780   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:48.395408   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:48.895349   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:49.394860   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:49.894472   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:50.395102   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:50.895157   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:47.512284   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.011848   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.413787   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:52.913793   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:50.933240   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:52.935428   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:51.394691   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:51.895193   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:52.395131   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:52.894787   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:53.394652   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:53.895139   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:54.395160   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:54.895153   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:55.394410   50624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:20:55.584599   50624 kubeadm.go:1088] duration metric: took 12.347498848s to wait for elevateKubeSystemPrivileges.
	I1207 21:20:55.584628   50624 kubeadm.go:406] StartCluster complete in 5m7.857234007s
	I1207 21:20:55.584645   50624 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:20:55.584733   50624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:20:55.587311   50624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:20:55.587607   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:20:55.587630   50624 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:20:55.587708   50624 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-598346"
	I1207 21:20:55.587716   50624 addons.go:69] Setting default-storageclass=true in profile "embed-certs-598346"
	I1207 21:20:55.587728   50624 addons.go:69] Setting metrics-server=true in profile "embed-certs-598346"
	I1207 21:20:55.587739   50624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-598346"
	I1207 21:20:55.587760   50624 addons.go:231] Setting addon metrics-server=true in "embed-certs-598346"
	W1207 21:20:55.587769   50624 addons.go:240] addon metrics-server should already be in state true
	I1207 21:20:55.587826   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.587736   50624 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-598346"
	W1207 21:20:55.587852   50624 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:20:55.587901   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.587824   50624 config.go:182] Loaded profile config "embed-certs-598346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:20:55.588192   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588202   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588223   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.588224   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.588284   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.588308   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.605717   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I1207 21:20:55.605750   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I1207 21:20:55.605726   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1207 21:20:55.606254   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606305   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606338   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.606778   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606803   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.606823   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606844   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.606826   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.606904   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.607178   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607218   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607274   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.607420   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.607776   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.607816   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.607818   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.607849   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.610610   50624 addons.go:231] Setting addon default-storageclass=true in "embed-certs-598346"
	W1207 21:20:55.610628   50624 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:20:55.610647   50624 host.go:66] Checking if "embed-certs-598346" exists ...
	I1207 21:20:55.610902   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.610927   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.624530   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I1207 21:20:55.624997   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.625474   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.625492   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.625833   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.626016   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.626236   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I1207 21:20:55.626715   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.627093   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I1207 21:20:55.627538   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.627700   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.627709   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.628044   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.628061   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.628109   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.628112   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.629910   50624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:20:55.628721   50624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:20:55.628756   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.631270   50624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:20:55.631338   50624 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:20:55.631357   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:20:55.631371   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.631724   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.634618   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.636632   50624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:20:55.635162   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.635740   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.638311   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:20:55.638331   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:20:55.638354   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.638318   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.638427   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.638930   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.639110   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.639264   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.642987   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.643401   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.643432   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.643605   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.643794   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.643947   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.644065   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.649214   50624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37993
	I1207 21:20:55.649604   50624 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:20:55.650085   50624 main.go:141] libmachine: Using API Version  1
	I1207 21:20:55.650106   50624 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:20:55.650583   50624 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:20:55.650740   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetState
	I1207 21:20:55.657356   50624 main.go:141] libmachine: (embed-certs-598346) Calling .DriverName
	I1207 21:20:55.657691   50624 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:20:55.657708   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:20:55.657727   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHHostname
	I1207 21:20:55.659345   50624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-598346" context rescaled to 1 replicas
	I1207 21:20:55.659381   50624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:20:55.660949   50624 out.go:177] * Verifying Kubernetes components...
	I1207 21:20:55.662172   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:20:55.661748   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.662288   50624 main.go:141] libmachine: (embed-certs-598346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:56:8f", ip: ""} in network mk-embed-certs-598346: {Iface:virbr4 ExpiryTime:2023-12-07 22:15:33 +0000 UTC Type:0 Mac:52:54:00:15:56:8f Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-598346 Clientid:01:52:54:00:15:56:8f}
	I1207 21:20:55.662323   50624 main.go:141] libmachine: (embed-certs-598346) DBG | domain embed-certs-598346 has defined IP address 192.168.72.180 and MAC address 52:54:00:15:56:8f in network mk-embed-certs-598346
	I1207 21:20:55.662617   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHPort
	I1207 21:20:55.662821   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHKeyPath
	I1207 21:20:55.662992   50624 main.go:141] libmachine: (embed-certs-598346) Calling .GetSSHUsername
	I1207 21:20:55.663175   50624 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/embed-certs-598346/id_rsa Username:docker}
	I1207 21:20:55.825166   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:20:55.850131   50624 node_ready.go:35] waiting up to 6m0s for node "embed-certs-598346" to be "Ready" ...
	I1207 21:20:55.850203   50624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:20:55.850365   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:20:55.850378   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:20:55.879031   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:20:55.896010   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:20:55.896034   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:20:55.910575   50624 node_ready.go:49] node "embed-certs-598346" has status "Ready":"True"
	I1207 21:20:55.910603   50624 node_ready.go:38] duration metric: took 60.438039ms waiting for node "embed-certs-598346" to be "Ready" ...
	I1207 21:20:55.910615   50624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:55.976847   50624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:20:55.976874   50624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:20:55.981345   50624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:56.068591   50624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:20:52.509374   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:55.012033   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:54.915300   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.414020   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.761169   50624 pod_ready.go:97] error getting pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7cvcf" not found
	I1207 21:20:57.761195   50624 pod_ready.go:81] duration metric: took 1.779826027s waiting for pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:57.761205   50624 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-7cvcf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7cvcf" not found
	I1207 21:20:57.761212   50624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.813172   50624 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.962919124s)
	I1207 21:20:58.813238   50624 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1207 21:20:58.813195   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.934130104s)
	I1207 21:20:58.813281   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813299   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813520   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.988311627s)
	I1207 21:20:58.813560   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813572   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813757   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.813776   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.813787   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.813796   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.813831   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.814066   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.814066   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814093   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.814097   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814110   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.814132   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.814152   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.814511   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.814531   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.839304   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.839329   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.839611   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.839653   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.839663   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.859922   50624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.791233211s)
	I1207 21:20:58.859979   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.859998   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.860412   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.860469   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.860483   50624 main.go:141] libmachine: Making call to close driver server
	I1207 21:20:58.860495   50624 main.go:141] libmachine: (embed-certs-598346) Calling .Close
	I1207 21:20:58.860430   50624 main.go:141] libmachine: (embed-certs-598346) DBG | Closing plugin on server side
	I1207 21:20:58.860749   50624 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:20:58.860768   50624 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:20:58.860778   50624 addons.go:467] Verifying addon metrics-server=true in "embed-certs-598346"
	I1207 21:20:58.863874   50624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:20:55.431955   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:57.434174   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:58.865423   50624 addons.go:502] enable addons completed in 3.277791662s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:20:58.894841   50624 pod_ready.go:92] pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.894877   50624 pod_ready.go:81] duration metric: took 1.133651819s waiting for pod "coredns-5dd5756b68-nllk7" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.894891   50624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.906981   50624 pod_ready.go:92] pod "etcd-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.907009   50624 pod_ready.go:81] duration metric: took 12.109561ms waiting for pod "etcd-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.907020   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.918176   50624 pod_ready.go:92] pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.918198   50624 pod_ready.go:81] duration metric: took 11.169952ms waiting for pod "kube-apiserver-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.918211   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.928763   50624 pod_ready.go:92] pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:20:58.928791   50624 pod_ready.go:81] duration metric: took 10.570922ms waiting for pod "kube-controller-manager-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:20:58.928804   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h4pmv" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.163618   50624 pod_ready.go:92] pod "kube-proxy-h4pmv" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:00.163652   50624 pod_ready.go:81] duration metric: took 1.234839709s waiting for pod "kube-proxy-h4pmv" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.163664   50624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.455887   50624 pod_ready.go:92] pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:00.455909   50624 pod_ready.go:81] duration metric: took 292.236645ms waiting for pod "kube-scheduler-embed-certs-598346" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:00.455917   50624 pod_ready.go:38] duration metric: took 4.545291617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:00.455932   50624 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:00.455974   50624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:00.474126   50624 api_server.go:72] duration metric: took 4.814712718s to wait for apiserver process to appear ...
	I1207 21:21:00.474151   50624 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:00.474170   50624 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1207 21:21:00.480909   50624 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1207 21:21:00.482468   50624 api_server.go:141] control plane version: v1.28.4
	I1207 21:21:00.482491   50624 api_server.go:131] duration metric: took 8.332499ms to wait for apiserver health ...
	I1207 21:21:00.482500   50624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:00.658932   50624 system_pods.go:59] 8 kube-system pods found
	I1207 21:21:00.658965   50624 system_pods.go:61] "coredns-5dd5756b68-nllk7" [89c53a27-fa3e-40e9-b180-1bb6ae5c7b62] Running
	I1207 21:21:00.658973   50624 system_pods.go:61] "etcd-embed-certs-598346" [a837c9ba-7a9d-4c61-9474-160ff283b42e] Running
	I1207 21:21:00.658980   50624 system_pods.go:61] "kube-apiserver-embed-certs-598346" [d65bb254-2c09-49c3-98a8-651f580e5f3d] Running
	I1207 21:21:00.658986   50624 system_pods.go:61] "kube-controller-manager-embed-certs-598346" [307a7c5c-0579-4c3c-a84f-e99d61dd8722] Running
	I1207 21:21:00.658992   50624 system_pods.go:61] "kube-proxy-h4pmv" [2d3cc315-efaf-47b9-86e3-851cc930461b] Running
	I1207 21:21:00.658999   50624 system_pods.go:61] "kube-scheduler-embed-certs-598346" [43983338-9029-4240-9b20-b23f64f6880c] Running
	I1207 21:21:00.659010   50624 system_pods.go:61] "metrics-server-57f55c9bc5-pstg2" [463b12c8-de62-4ff8-a5c4-55eeb721eea8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:00.659018   50624 system_pods.go:61] "storage-provisioner" [838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14] Running
	I1207 21:21:00.659036   50624 system_pods.go:74] duration metric: took 176.530206ms to wait for pod list to return data ...
	I1207 21:21:00.659049   50624 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:00.853965   50624 default_sa.go:45] found service account: "default"
	I1207 21:21:00.853997   50624 default_sa.go:55] duration metric: took 194.939162ms for default service account to be created ...
	I1207 21:21:00.854008   50624 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:01.058565   50624 system_pods.go:86] 8 kube-system pods found
	I1207 21:21:01.058594   50624 system_pods.go:89] "coredns-5dd5756b68-nllk7" [89c53a27-fa3e-40e9-b180-1bb6ae5c7b62] Running
	I1207 21:21:01.058600   50624 system_pods.go:89] "etcd-embed-certs-598346" [a837c9ba-7a9d-4c61-9474-160ff283b42e] Running
	I1207 21:21:01.058604   50624 system_pods.go:89] "kube-apiserver-embed-certs-598346" [d65bb254-2c09-49c3-98a8-651f580e5f3d] Running
	I1207 21:21:01.058609   50624 system_pods.go:89] "kube-controller-manager-embed-certs-598346" [307a7c5c-0579-4c3c-a84f-e99d61dd8722] Running
	I1207 21:21:01.058613   50624 system_pods.go:89] "kube-proxy-h4pmv" [2d3cc315-efaf-47b9-86e3-851cc930461b] Running
	I1207 21:21:01.058617   50624 system_pods.go:89] "kube-scheduler-embed-certs-598346" [43983338-9029-4240-9b20-b23f64f6880c] Running
	I1207 21:21:01.058634   50624 system_pods.go:89] "metrics-server-57f55c9bc5-pstg2" [463b12c8-de62-4ff8-a5c4-55eeb721eea8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:01.058640   50624 system_pods.go:89] "storage-provisioner" [838eb0e1-6b6d-4bae-aaaf-b8d8d80c5a14] Running
	I1207 21:21:01.058651   50624 system_pods.go:126] duration metric: took 204.636417ms to wait for k8s-apps to be running ...
	I1207 21:21:01.058664   50624 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:01.058707   50624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:01.081694   50624 system_svc.go:56] duration metric: took 23.018184ms WaitForService to wait for kubelet.
	I1207 21:21:01.081719   50624 kubeadm.go:581] duration metric: took 5.422310896s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:01.081736   50624 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:01.254804   50624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:01.254838   50624 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:01.254851   50624 node_conditions.go:105] duration metric: took 173.110501ms to run NodePressure ...
	I1207 21:21:01.254866   50624 start.go:228] waiting for startup goroutines ...
	I1207 21:21:01.254875   50624 start.go:233] waiting for cluster config update ...
	I1207 21:21:01.254888   50624 start.go:242] writing updated cluster config ...
	I1207 21:21:01.255260   50624 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:01.312696   50624 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:21:01.314740   50624 out.go:177] * Done! kubectl is now configured to use "embed-certs-598346" cluster and "default" namespace by default
	I1207 21:20:57.510167   51037 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:59.202324   51037 pod_ready.go:81] duration metric: took 4m0.000618876s waiting for pod "metrics-server-57f55c9bc5-fv8x4" in "kube-system" namespace to be "Ready" ...
	E1207 21:20:59.202361   51037 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:20:59.202386   51037 pod_ready.go:38] duration metric: took 4m13.59894194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:20:59.202417   51037 kubeadm.go:640] restartCluster took 4m34.848470509s
	W1207 21:20:59.202490   51037 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:20:59.202525   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:20:59.416072   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:01.416132   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:20:59.932924   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:01.933678   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:04.432068   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:03.914100   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:06.414149   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:06.432277   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:08.432456   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:08.914660   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:10.927167   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.414941   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.233635   51037 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.031083103s)
	I1207 21:21:13.233717   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:13.246941   51037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:21:13.256697   51037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:21:13.265143   51037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:21:13.265188   51037 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 21:21:13.323766   51037 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1207 21:21:13.323875   51037 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:21:13.477749   51037 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:21:13.477938   51037 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:21:13.478083   51037 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 21:21:13.750607   51037 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:21:13.752541   51037 out.go:204]   - Generating certificates and keys ...
	I1207 21:21:13.752655   51037 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:21:13.752735   51037 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:21:13.752887   51037 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:21:13.753031   51037 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:21:13.753250   51037 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:21:13.753432   51037 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:21:13.753647   51037 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:21:13.753850   51037 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:21:13.754167   51037 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:21:13.755114   51037 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:21:13.755889   51037 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:21:13.756020   51037 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:21:13.859938   51037 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:21:14.193613   51037 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 21:21:14.239766   51037 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:21:14.448306   51037 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:21:14.537558   51037 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:21:14.538242   51037 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:21:14.542910   51037 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:21:10.432632   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:12.932769   51113 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:13.123869   51113 pod_ready.go:81] duration metric: took 4m0.000917841s waiting for pod "metrics-server-57f55c9bc5-qvq95" in "kube-system" namespace to be "Ready" ...
	E1207 21:21:13.123898   51113 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:21:13.123907   51113 pod_ready.go:38] duration metric: took 4m7.926070649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:13.123923   51113 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:13.123951   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:13.124010   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:13.197887   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:13.197918   51113 cri.go:89] found id: ""
	I1207 21:21:13.197947   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:13.198016   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.203887   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:13.203953   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:13.250727   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:13.250754   51113 cri.go:89] found id: ""
	I1207 21:21:13.250766   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:13.250823   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.255837   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:13.255881   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:13.297690   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:13.297719   51113 cri.go:89] found id: ""
	I1207 21:21:13.297729   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:13.297786   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.303238   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:13.303301   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:13.349838   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:13.349879   51113 cri.go:89] found id: ""
	I1207 21:21:13.349890   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:13.349960   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.354368   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:13.354423   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:13.394201   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:13.394230   51113 cri.go:89] found id: ""
	I1207 21:21:13.394240   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:13.394298   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.398418   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:13.398489   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:13.443027   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:13.443055   51113 cri.go:89] found id: ""
	I1207 21:21:13.443065   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:13.443129   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.447530   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:13.447601   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:13.491670   51113 cri.go:89] found id: ""
	I1207 21:21:13.491712   51113 logs.go:284] 0 containers: []
	W1207 21:21:13.491720   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:13.491735   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:13.491795   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:13.541386   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:13.541414   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:13.541421   51113 cri.go:89] found id: ""
	I1207 21:21:13.541430   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:13.541491   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.546270   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:13.551524   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:13.551549   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:13.630073   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:13.630119   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:13.680287   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:13.680318   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:13.733406   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:13.733442   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:13.751810   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:13.751845   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:13.905859   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:13.905889   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:13.950595   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:13.950626   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:13.993833   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:13.993862   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:14.488205   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:14.488242   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:14.531169   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:14.531201   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:14.588229   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:14.588268   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:14.642280   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:14.642310   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:14.693027   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:14.693062   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:14.544787   51037 out.go:204]   - Booting up control plane ...
	I1207 21:21:14.544925   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:21:14.545032   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:21:14.545988   51037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:21:14.565092   51037 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:21:14.566289   51037 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:21:14.566356   51037 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 21:21:14.723698   51037 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:21:15.913198   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:17.914942   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:17.234321   51113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:17.253156   51113 api_server.go:72] duration metric: took 4m17.441427611s to wait for apiserver process to appear ...
	I1207 21:21:17.253187   51113 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:17.253223   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:17.253330   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:17.301526   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:17.301557   51113 cri.go:89] found id: ""
	I1207 21:21:17.301573   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:17.301631   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.306049   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:17.306124   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:17.359167   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:17.359195   51113 cri.go:89] found id: ""
	I1207 21:21:17.359205   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:17.359264   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.363853   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:17.363919   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:17.403245   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:17.403271   51113 cri.go:89] found id: ""
	I1207 21:21:17.403281   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:17.403345   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.407694   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:17.407771   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:17.462260   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:17.462287   51113 cri.go:89] found id: ""
	I1207 21:21:17.462298   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:17.462355   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.467157   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:17.467214   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:17.502206   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:17.502236   51113 cri.go:89] found id: ""
	I1207 21:21:17.502246   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:17.502301   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.507601   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:17.507672   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:17.550248   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:17.550275   51113 cri.go:89] found id: ""
	I1207 21:21:17.550284   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:17.550345   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.554817   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:17.554879   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:17.595234   51113 cri.go:89] found id: ""
	I1207 21:21:17.595262   51113 logs.go:284] 0 containers: []
	W1207 21:21:17.595272   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:17.595280   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:17.595331   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:17.657464   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:17.657491   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:17.657501   51113 cri.go:89] found id: ""
	I1207 21:21:17.657511   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:17.657566   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.662364   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:17.667878   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:17.667901   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:17.716160   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:17.716187   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:17.770503   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:17.770548   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:17.836877   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:17.836933   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:17.881499   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:17.881536   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:17.930792   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:17.930837   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:17.945486   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:17.945519   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:18.087782   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:18.087825   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:18.149272   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:18.149312   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:18.196792   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:18.196829   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:18.243539   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:18.243575   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:18.305424   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:18.305465   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:18.772176   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:18.772213   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:19.916426   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:22.414318   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:22.728616   51037 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002882 seconds
	I1207 21:21:22.745711   51037 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:21:22.772747   51037 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:21:23.310807   51037 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:21:23.311004   51037 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-950431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 21:21:23.826933   51037 kubeadm.go:322] [bootstrap-token] Using token: ft70hz.nx8ps5rcldht4kzk
	I1207 21:21:23.828530   51037 out.go:204]   - Configuring RBAC rules ...
	I1207 21:21:23.828676   51037 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:21:23.836739   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 21:21:23.845207   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:21:23.852566   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:21:23.856912   51037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:21:23.863418   51037 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:21:23.881183   51037 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 21:21:24.185664   51037 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:21:24.246564   51037 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:21:24.246626   51037 kubeadm.go:322] 
	I1207 21:21:24.246741   51037 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:21:24.246761   51037 kubeadm.go:322] 
	I1207 21:21:24.246858   51037 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:21:24.246868   51037 kubeadm.go:322] 
	I1207 21:21:24.246898   51037 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:21:24.246967   51037 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:21:24.247047   51037 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:21:24.247063   51037 kubeadm.go:322] 
	I1207 21:21:24.247122   51037 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 21:21:24.247132   51037 kubeadm.go:322] 
	I1207 21:21:24.247183   51037 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 21:21:24.247193   51037 kubeadm.go:322] 
	I1207 21:21:24.247259   51037 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:21:24.247361   51037 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:21:24.247450   51037 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:21:24.247461   51037 kubeadm.go:322] 
	I1207 21:21:24.247565   51037 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 21:21:24.247669   51037 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:21:24.247678   51037 kubeadm.go:322] 
	I1207 21:21:24.247777   51037 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft70hz.nx8ps5rcldht4kzk \
	I1207 21:21:24.247910   51037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:21:24.247941   51037 kubeadm.go:322] 	--control-plane 
	I1207 21:21:24.247951   51037 kubeadm.go:322] 
	I1207 21:21:24.248049   51037 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:21:24.248059   51037 kubeadm.go:322] 
	I1207 21:21:24.248150   51037 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft70hz.nx8ps5rcldht4kzk \
	I1207 21:21:24.248271   51037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:21:24.249001   51037 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:21:24.249031   51037 cni.go:84] Creating CNI manager for ""
	I1207 21:21:24.249041   51037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:21:24.250938   51037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:21:21.338084   51113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8444/healthz ...
	I1207 21:21:21.343250   51113 api_server.go:279] https://192.168.39.254:8444/healthz returned 200:
	ok
	I1207 21:21:21.344871   51113 api_server.go:141] control plane version: v1.28.4
	I1207 21:21:21.344892   51113 api_server.go:131] duration metric: took 4.091697961s to wait for apiserver health ...
	I1207 21:21:21.344901   51113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:21.344930   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 21:21:21.344990   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 21:21:21.385908   51113 cri.go:89] found id: "0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:21.385944   51113 cri.go:89] found id: ""
	I1207 21:21:21.385954   51113 logs.go:284] 1 containers: [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358]
	I1207 21:21:21.386011   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.390584   51113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 21:21:21.390655   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 21:21:21.435206   51113 cri.go:89] found id: "333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:21.435226   51113 cri.go:89] found id: ""
	I1207 21:21:21.435236   51113 logs.go:284] 1 containers: [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc]
	I1207 21:21:21.435294   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.441020   51113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 21:21:21.441091   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 21:21:21.480294   51113 cri.go:89] found id: "5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:21.480319   51113 cri.go:89] found id: ""
	I1207 21:21:21.480329   51113 logs.go:284] 1 containers: [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7]
	I1207 21:21:21.480384   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.484454   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 21:21:21.484511   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 21:21:21.531792   51113 cri.go:89] found id: "3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:21.531817   51113 cri.go:89] found id: ""
	I1207 21:21:21.531826   51113 logs.go:284] 1 containers: [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4]
	I1207 21:21:21.531884   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.536194   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 21:21:21.536265   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 21:21:21.579784   51113 cri.go:89] found id: "e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:21.579803   51113 cri.go:89] found id: ""
	I1207 21:21:21.579810   51113 logs.go:284] 1 containers: [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9]
	I1207 21:21:21.579852   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.583895   51113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 21:21:21.583961   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 21:21:21.623350   51113 cri.go:89] found id: "2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:21.623383   51113 cri.go:89] found id: ""
	I1207 21:21:21.623393   51113 logs.go:284] 1 containers: [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c]
	I1207 21:21:21.623450   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.628173   51113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 21:21:21.628226   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 21:21:21.670522   51113 cri.go:89] found id: ""
	I1207 21:21:21.670549   51113 logs.go:284] 0 containers: []
	W1207 21:21:21.670559   51113 logs.go:286] No container was found matching "kindnet"
	I1207 21:21:21.670565   51113 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 21:21:21.670622   51113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 21:21:21.717892   51113 cri.go:89] found id: "6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:21.717918   51113 cri.go:89] found id: "40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:21.717939   51113 cri.go:89] found id: ""
	I1207 21:21:21.717958   51113 logs.go:284] 2 containers: [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e]
	I1207 21:21:21.718024   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.724161   51113 ssh_runner.go:195] Run: which crictl
	I1207 21:21:21.728796   51113 logs.go:123] Gathering logs for dmesg ...
	I1207 21:21:21.728817   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 21:21:21.743574   51113 logs.go:123] Gathering logs for CRI-O ...
	I1207 21:21:21.743599   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 21:21:22.158202   51113 logs.go:123] Gathering logs for container status ...
	I1207 21:21:22.158247   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 21:21:22.224569   51113 logs.go:123] Gathering logs for describe nodes ...
	I1207 21:21:22.224610   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1207 21:21:22.376503   51113 logs.go:123] Gathering logs for coredns [5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7] ...
	I1207 21:21:22.376539   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a99c774cf0049e1f437327753f2a5aa9b797ab268e679e422108b503088b7b7"
	I1207 21:21:22.421207   51113 logs.go:123] Gathering logs for kube-scheduler [3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4] ...
	I1207 21:21:22.421236   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d55aee82d6e74587fe5f5fd58ce02ad955a51125d1690919bfb738742c1e0f4"
	I1207 21:21:22.468100   51113 logs.go:123] Gathering logs for storage-provisioner [40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e] ...
	I1207 21:21:22.468130   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40b29d34e8a9e7a2d5e89f9962bf5c0caa876c5b1af4154e7ff4c1aa5d463d9e"
	I1207 21:21:22.514216   51113 logs.go:123] Gathering logs for kube-proxy [e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9] ...
	I1207 21:21:22.514246   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5f03abdf541c16051411c032670c8abded3f782a694ad6fa416eeed78cba0f9"
	I1207 21:21:22.563190   51113 logs.go:123] Gathering logs for kubelet ...
	I1207 21:21:22.563217   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 21:21:22.622636   51113 logs.go:123] Gathering logs for kube-apiserver [0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358] ...
	I1207 21:21:22.622673   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0127dcb687572cc484b00a07b48f7523f4537fe8e811d48782b8aeb78812c358"
	I1207 21:21:22.673280   51113 logs.go:123] Gathering logs for etcd [333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc] ...
	I1207 21:21:22.673309   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 333f8e7b3b0bae3d414ab276256dadafaecee7902de156ac266252d8fc2c14bc"
	I1207 21:21:22.724767   51113 logs.go:123] Gathering logs for kube-controller-manager [2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c] ...
	I1207 21:21:22.724799   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfc84b682d89088d217a11d362dc715d117db8af1add8d1b76a4a2e03ec7f4c"
	I1207 21:21:22.787505   51113 logs.go:123] Gathering logs for storage-provisioner [6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc] ...
	I1207 21:21:22.787539   51113 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d19830626a12de3b9a80f288bb0d9ba03c450dad0321ff3b262847dd140a0fc"
	I1207 21:21:25.337268   51113 system_pods.go:59] 8 kube-system pods found
	I1207 21:21:25.337297   51113 system_pods.go:61] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running
	I1207 21:21:25.337304   51113 system_pods.go:61] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running
	I1207 21:21:25.337312   51113 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running
	I1207 21:21:25.337319   51113 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running
	I1207 21:21:25.337325   51113 system_pods.go:61] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running
	I1207 21:21:25.337331   51113 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running
	I1207 21:21:25.337338   51113 system_pods.go:61] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:25.337347   51113 system_pods.go:61] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running
	I1207 21:21:25.337354   51113 system_pods.go:74] duration metric: took 3.99244703s to wait for pod list to return data ...
	I1207 21:21:25.337363   51113 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:25.340607   51113 default_sa.go:45] found service account: "default"
	I1207 21:21:25.340630   51113 default_sa.go:55] duration metric: took 3.261042ms for default service account to be created ...
	I1207 21:21:25.340637   51113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:25.351616   51113 system_pods.go:86] 8 kube-system pods found
	I1207 21:21:25.351640   51113 system_pods.go:89] "coredns-5dd5756b68-drrlk" [abdd350f-1ec9-42f2-aac8-63015e2f22c2] Running
	I1207 21:21:25.351646   51113 system_pods.go:89] "etcd-default-k8s-diff-port-275828" [035ea6fe-c094-4006-b09e-d7b78e71183a] Running
	I1207 21:21:25.351651   51113 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-275828" [65a7bab0-0808-4bbf-8a20-9698672c00b9] Running
	I1207 21:21:25.351656   51113 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-275828" [548e012a-ea9e-486f-a8a5-6bb2d9ed063a] Running
	I1207 21:21:25.351659   51113 system_pods.go:89] "kube-proxy-nmx2z" [1f466e5e-a6b2-4413-b456-7a90bc120735] Running
	I1207 21:21:25.351663   51113 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-275828" [d1cbd83a-aceb-40a0-afc5-b67d9c9af778] Running
	I1207 21:21:25.351670   51113 system_pods.go:89] "metrics-server-57f55c9bc5-qvq95" [ff9eb289-7fe2-4d11-a369-12b1c34a1937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:25.351675   51113 system_pods.go:89] "storage-provisioner" [adc81a49-dc39-4d36-8d28-f7f3d6a8cab5] Running
	I1207 21:21:25.351681   51113 system_pods.go:126] duration metric: took 11.04015ms to wait for k8s-apps to be running ...
	I1207 21:21:25.351686   51113 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:25.351725   51113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:25.368853   51113 system_svc.go:56] duration metric: took 17.156347ms WaitForService to wait for kubelet.
	I1207 21:21:25.368883   51113 kubeadm.go:581] duration metric: took 4m25.557159696s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:25.368908   51113 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:25.372224   51113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:25.372247   51113 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:25.372257   51113 node_conditions.go:105] duration metric: took 3.343495ms to run NodePressure ...
	I1207 21:21:25.372268   51113 start.go:228] waiting for startup goroutines ...
	I1207 21:21:25.372273   51113 start.go:233] waiting for cluster config update ...
	I1207 21:21:25.372282   51113 start.go:242] writing updated cluster config ...
	I1207 21:21:25.372598   51113 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:25.426941   51113 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 21:21:25.429177   51113 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-275828" cluster and "default" namespace by default
	I1207 21:21:24.252623   51037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:21:24.278852   51037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 21:21:24.346081   51037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:21:24.346144   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.346161   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=no-preload-950431 minikube.k8s.io/updated_at=2023_12_07T21_21_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.458044   51037 ops.go:34] apiserver oom_adj: -16
	I1207 21:21:24.715413   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.801098   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:25.396467   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:25.895918   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:26.396185   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:24.914616   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:26.915500   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:26.896260   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:27.396455   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:27.896542   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:28.396551   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:28.896865   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.395921   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.896782   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:30.396223   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:30.896296   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:31.395834   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:29.414005   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:31.415580   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:31.896019   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:32.395959   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:32.895826   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:33.396820   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:33.896674   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:34.396109   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:34.896537   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:35.396438   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:35.896709   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:36.396689   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:36.896404   51037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:21:37.062200   51037 kubeadm.go:1088] duration metric: took 12.716124423s to wait for elevateKubeSystemPrivileges.
	I1207 21:21:37.062237   51037 kubeadm.go:406] StartCluster complete in 5m12.769835709s
	I1207 21:21:37.062255   51037 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:21:37.062333   51037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:21:37.064828   51037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:21:37.065103   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:21:37.065193   51037 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:21:37.065273   51037 addons.go:69] Setting storage-provisioner=true in profile "no-preload-950431"
	I1207 21:21:37.065291   51037 addons.go:231] Setting addon storage-provisioner=true in "no-preload-950431"
	W1207 21:21:37.065299   51037 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:21:37.065297   51037 addons.go:69] Setting default-storageclass=true in profile "no-preload-950431"
	I1207 21:21:37.065323   51037 config.go:182] Loaded profile config "no-preload-950431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:21:37.065329   51037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-950431"
	I1207 21:21:37.065349   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.065302   51037 addons.go:69] Setting metrics-server=true in profile "no-preload-950431"
	I1207 21:21:37.065374   51037 addons.go:231] Setting addon metrics-server=true in "no-preload-950431"
	W1207 21:21:37.065388   51037 addons.go:240] addon metrics-server should already be in state true
	I1207 21:21:37.065423   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.065737   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065751   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065751   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.065780   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.065772   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.065821   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.083129   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I1207 21:21:37.083593   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I1207 21:21:37.083761   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084047   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084356   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I1207 21:21:37.084566   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.084590   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.084625   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.084645   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.084667   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.084935   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.084997   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.085044   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.085065   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.085381   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.085505   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.085542   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.085741   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.085909   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.085964   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.089134   51037 addons.go:231] Setting addon default-storageclass=true in "no-preload-950431"
	W1207 21:21:37.089153   51037 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:21:37.089180   51037 host.go:66] Checking if "no-preload-950431" exists ...
	I1207 21:21:37.089673   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.089712   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.101048   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35191
	I1207 21:21:37.101516   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.102279   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.102300   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.102727   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.103618   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.106122   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.107693   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I1207 21:21:37.107843   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I1207 21:21:37.108128   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.108521   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.108696   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.108709   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.109070   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.109204   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.109227   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.114090   51037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:21:37.109833   51037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:21:37.109949   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.115707   51037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:21:37.115743   51037 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:21:37.115765   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:21:37.115789   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.116919   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.119056   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.120429   51037 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:21:37.121716   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:21:37.121741   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:21:37.121759   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.119470   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.121830   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.121852   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.120097   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.122062   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.122309   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.122432   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.124738   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.124992   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.125012   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.125346   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.125523   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.125647   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.125817   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.136943   51037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I1207 21:21:37.137636   51037 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:21:37.138210   51037 main.go:141] libmachine: Using API Version  1
	I1207 21:21:37.138233   51037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:21:37.138659   51037 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:21:37.138896   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetState
	I1207 21:21:37.140541   51037 main.go:141] libmachine: (no-preload-950431) Calling .DriverName
	I1207 21:21:37.140792   51037 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:21:37.140808   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:21:37.140824   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHHostname
	I1207 21:21:37.144251   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.144616   51037 main.go:141] libmachine: (no-preload-950431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:97:8f", ip: ""} in network mk-no-preload-950431: {Iface:virbr2 ExpiryTime:2023-12-07 22:15:53 +0000 UTC Type:0 Mac:52:54:00:80:97:8f Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:no-preload-950431 Clientid:01:52:54:00:80:97:8f}
	I1207 21:21:37.144667   51037 main.go:141] libmachine: (no-preload-950431) DBG | domain no-preload-950431 has defined IP address 192.168.50.100 and MAC address 52:54:00:80:97:8f in network mk-no-preload-950431
	I1207 21:21:37.144856   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHPort
	I1207 21:21:37.145009   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHKeyPath
	I1207 21:21:37.145167   51037 main.go:141] libmachine: (no-preload-950431) Calling .GetSSHUsername
	I1207 21:21:37.145260   51037 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/no-preload-950431/id_rsa Username:docker}
	I1207 21:21:37.157909   51037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-950431" context rescaled to 1 replicas
	I1207 21:21:37.157965   51037 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:21:37.159529   51037 out.go:177] * Verifying Kubernetes components...
	I1207 21:21:33.914686   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:35.916902   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:38.413489   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:37.160895   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:37.329265   51037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:21:37.476842   51037 node_ready.go:35] waiting up to 6m0s for node "no-preload-950431" to be "Ready" ...
	I1207 21:21:37.481433   51037 node_ready.go:49] node "no-preload-950431" has status "Ready":"True"
	I1207 21:21:37.481456   51037 node_ready.go:38] duration metric: took 4.57457ms waiting for node "no-preload-950431" to be "Ready" ...
	I1207 21:21:37.481467   51037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:37.499564   51037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-cz2xd" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:37.556110   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:21:37.556142   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:21:37.558917   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:21:37.575696   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:21:37.653458   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:21:37.653478   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:21:37.782294   51037 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:21:37.782322   51037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:21:37.850657   51037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:21:38.161232   51037 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1207 21:21:38.734356   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.175402881s)
	I1207 21:21:38.734410   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734420   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734423   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.158690213s)
	I1207 21:21:38.734466   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734482   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734859   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.734873   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.734860   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.734911   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.734927   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.734935   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.734913   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735006   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.735016   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.735028   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.735166   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735192   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.735321   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.735357   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.735369   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:38.772677   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:38.772700   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:38.772969   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:38.773038   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:38.773055   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.056990   51037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206289914s)
	I1207 21:21:39.057048   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:39.057064   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:39.057441   51037 main.go:141] libmachine: (no-preload-950431) DBG | Closing plugin on server side
	I1207 21:21:39.057480   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:39.057502   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.057520   51037 main.go:141] libmachine: Making call to close driver server
	I1207 21:21:39.057534   51037 main.go:141] libmachine: (no-preload-950431) Calling .Close
	I1207 21:21:39.057809   51037 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:21:39.057826   51037 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:21:39.057845   51037 addons.go:467] Verifying addon metrics-server=true in "no-preload-950431"
	I1207 21:21:39.060003   51037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1207 21:21:39.061797   51037 addons.go:502] enable addons completed in 1.996609653s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1207 21:21:39.690111   51037 pod_ready.go:102] pod "coredns-76f75df574-cz2xd" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:40.698712   51037 pod_ready.go:92] pod "coredns-76f75df574-cz2xd" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.698739   51037 pod_ready.go:81] duration metric: took 3.199144567s waiting for pod "coredns-76f75df574-cz2xd" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.698751   51037 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hsjsq" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.714087   51037 pod_ready.go:92] pod "coredns-76f75df574-hsjsq" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.714108   51037 pod_ready.go:81] duration metric: took 15.350128ms waiting for pod "coredns-76f75df574-hsjsq" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.714117   51037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.725058   51037 pod_ready.go:92] pod "etcd-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.725078   51037 pod_ready.go:81] duration metric: took 10.955777ms waiting for pod "etcd-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.725089   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.742099   51037 pod_ready.go:92] pod "kube-apiserver-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.742127   51037 pod_ready.go:81] duration metric: took 17.029172ms waiting for pod "kube-apiserver-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.742140   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.748676   51037 pod_ready.go:92] pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:40.748699   51037 pod_ready.go:81] duration metric: took 6.549805ms waiting for pod "kube-controller-manager-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:40.748713   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6v8td" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:41.988512   51037 pod_ready.go:92] pod "kube-proxy-6v8td" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:41.988537   51037 pod_ready.go:81] duration metric: took 1.239816309s waiting for pod "kube-proxy-6v8td" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:41.988545   51037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:42.283301   51037 pod_ready.go:92] pod "kube-scheduler-no-preload-950431" in "kube-system" namespace has status "Ready":"True"
	I1207 21:21:42.283330   51037 pod_ready.go:81] duration metric: took 294.777559ms waiting for pod "kube-scheduler-no-preload-950431" in "kube-system" namespace to be "Ready" ...
	I1207 21:21:42.283341   51037 pod_ready.go:38] duration metric: took 4.801864648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:21:42.283360   51037 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:21:42.283420   51037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:21:42.308983   51037 api_server.go:72] duration metric: took 5.150987572s to wait for apiserver process to appear ...
	I1207 21:21:42.309013   51037 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:21:42.309036   51037 api_server.go:253] Checking apiserver healthz at https://192.168.50.100:8443/healthz ...
	I1207 21:21:42.315006   51037 api_server.go:279] https://192.168.50.100:8443/healthz returned 200:
	ok
	I1207 21:21:42.316220   51037 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 21:21:42.316240   51037 api_server.go:131] duration metric: took 7.219959ms to wait for apiserver health ...
	I1207 21:21:42.316247   51037 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:21:42.485186   51037 system_pods.go:59] 9 kube-system pods found
	I1207 21:21:42.485214   51037 system_pods.go:61] "coredns-76f75df574-cz2xd" [5757c023-02cd-4be8-b4cc-6b45154f7b5a] Running
	I1207 21:21:42.485218   51037 system_pods.go:61] "coredns-76f75df574-hsjsq" [91f9ed18-c964-409d-9a58-7c84c62d51db] Running
	I1207 21:21:42.485223   51037 system_pods.go:61] "etcd-no-preload-950431" [c5480a67-a406-4014-bf13-3e4e970d528b] Running
	I1207 21:21:42.485228   51037 system_pods.go:61] "kube-apiserver-no-preload-950431" [73177a27-c561-4f5c-900a-80226abb7bf1] Running
	I1207 21:21:42.485234   51037 system_pods.go:61] "kube-controller-manager-no-preload-950431" [3e231c95-fb0b-4915-9ab0-45f35e7d6a2c] Running
	I1207 21:21:42.485237   51037 system_pods.go:61] "kube-proxy-6v8td" [268d28d1-60a9-4323-b36f-883388fbdcea] Running
	I1207 21:21:42.485242   51037 system_pods.go:61] "kube-scheduler-no-preload-950431" [a6767118-a858-439d-a58f-0e62b0b7442e] Running
	I1207 21:21:42.485251   51037 system_pods.go:61] "metrics-server-57f55c9bc5-ffkls" [e571e115-9e30-4be3-b77c-27db27a95feb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:42.485258   51037 system_pods.go:61] "storage-provisioner" [9400eb14-80e0-4725-906e-b80cd7e998a1] Running
	I1207 21:21:42.485278   51037 system_pods.go:74] duration metric: took 169.025303ms to wait for pod list to return data ...
	I1207 21:21:42.485287   51037 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:21:42.680542   51037 default_sa.go:45] found service account: "default"
	I1207 21:21:42.680569   51037 default_sa.go:55] duration metric: took 195.272707ms for default service account to be created ...
	I1207 21:21:42.680577   51037 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:21:42.890877   51037 system_pods.go:86] 9 kube-system pods found
	I1207 21:21:42.890927   51037 system_pods.go:89] "coredns-76f75df574-cz2xd" [5757c023-02cd-4be8-b4cc-6b45154f7b5a] Running
	I1207 21:21:42.890933   51037 system_pods.go:89] "coredns-76f75df574-hsjsq" [91f9ed18-c964-409d-9a58-7c84c62d51db] Running
	I1207 21:21:42.890938   51037 system_pods.go:89] "etcd-no-preload-950431" [c5480a67-a406-4014-bf13-3e4e970d528b] Running
	I1207 21:21:42.890942   51037 system_pods.go:89] "kube-apiserver-no-preload-950431" [73177a27-c561-4f5c-900a-80226abb7bf1] Running
	I1207 21:21:42.890946   51037 system_pods.go:89] "kube-controller-manager-no-preload-950431" [3e231c95-fb0b-4915-9ab0-45f35e7d6a2c] Running
	I1207 21:21:42.890950   51037 system_pods.go:89] "kube-proxy-6v8td" [268d28d1-60a9-4323-b36f-883388fbdcea] Running
	I1207 21:21:42.890954   51037 system_pods.go:89] "kube-scheduler-no-preload-950431" [a6767118-a858-439d-a58f-0e62b0b7442e] Running
	I1207 21:21:42.890960   51037 system_pods.go:89] "metrics-server-57f55c9bc5-ffkls" [e571e115-9e30-4be3-b77c-27db27a95feb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:21:42.890965   51037 system_pods.go:89] "storage-provisioner" [9400eb14-80e0-4725-906e-b80cd7e998a1] Running
	I1207 21:21:42.890973   51037 system_pods.go:126] duration metric: took 210.38383ms to wait for k8s-apps to be running ...
	I1207 21:21:42.890979   51037 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:21:42.891021   51037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:21:42.907279   51037 system_svc.go:56] duration metric: took 16.290689ms WaitForService to wait for kubelet.
	I1207 21:21:42.907306   51037 kubeadm.go:581] duration metric: took 5.749318034s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:21:42.907328   51037 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:21:43.081361   51037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:21:43.081390   51037 node_conditions.go:123] node cpu capacity is 2
	I1207 21:21:43.081401   51037 node_conditions.go:105] duration metric: took 174.067442ms to run NodePressure ...
	I1207 21:21:43.081412   51037 start.go:228] waiting for startup goroutines ...
	I1207 21:21:43.081420   51037 start.go:233] waiting for cluster config update ...
	I1207 21:21:43.081433   51037 start.go:242] writing updated cluster config ...
	I1207 21:21:43.081691   51037 ssh_runner.go:195] Run: rm -f paused
	I1207 21:21:43.131409   51037 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1207 21:21:43.133483   51037 out.go:177] * Done! kubectl is now configured to use "no-preload-950431" cluster and "default" namespace by default
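With the "Done!" line above, profile "no-preload-950431" is up and minikube has already switched the kubeconfig context. A minimal sketch of how the cluster could be checked from the host at this point (kubectl 1.28.4 as reported in the log; the commands below are illustrative and not part of the test):

    # confirm the context minikube just wrote
    kubectl config current-context
    # the single node should report Ready
    kubectl get nodes -o wide
    # kube-system pods; metrics-server-57f55c9bc5-ffkls is still Pending per the log above
    kubectl -n kube-system get pods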
	I1207 21:21:40.414676   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:42.913795   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:44.914599   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:47.414431   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:49.913391   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:51.914426   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:53.915196   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:55.923342   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:21:58.413783   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:00.414241   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:02.414435   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:04.913358   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:06.913909   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:08.915098   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:11.414320   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:13.414489   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:15.913521   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:18.415215   50270 pod_ready.go:102] pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:19.107244   50270 pod_ready.go:81] duration metric: took 4m0.000150933s waiting for pod "metrics-server-74d5856cc6-mbs6q" in "kube-system" namespace to be "Ready" ...
	E1207 21:22:19.107300   50270 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1207 21:22:19.107323   50270 pod_ready.go:38] duration metric: took 4m1.199790563s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:19.107355   50270 kubeadm.go:640] restartCluster took 5m20.261390035s
	W1207 21:22:19.107437   50270 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1207 21:22:19.107470   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 21:22:26.124587   50270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (7.017092462s)
	I1207 21:22:26.124664   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:22:26.139323   50270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 21:22:26.150243   50270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 21:22:26.164289   50270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 21:22:26.164356   50270 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1207 21:22:26.390137   50270 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 21:22:39.046001   50270 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1207 21:22:39.046063   50270 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 21:22:39.046164   50270 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 21:22:39.046322   50270 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 21:22:39.046454   50270 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 21:22:39.046581   50270 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 21:22:39.046685   50270 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 21:22:39.046759   50270 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1207 21:22:39.046836   50270 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 21:22:39.048426   50270 out.go:204]   - Generating certificates and keys ...
	I1207 21:22:39.048532   50270 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 21:22:39.048617   50270 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 21:22:39.048713   50270 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 21:22:39.048808   50270 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1207 21:22:39.048899   50270 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 21:22:39.048977   50270 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1207 21:22:39.049066   50270 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1207 21:22:39.049151   50270 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1207 21:22:39.049254   50270 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 21:22:39.049341   50270 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 21:22:39.049396   50270 kubeadm.go:322] [certs] Using the existing "sa" key
	I1207 21:22:39.049496   50270 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 21:22:39.049578   50270 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 21:22:39.049671   50270 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 21:22:39.049758   50270 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 21:22:39.049829   50270 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 21:22:39.049884   50270 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 21:22:39.051499   50270 out.go:204]   - Booting up control plane ...
	I1207 21:22:39.051604   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 21:22:39.051706   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 21:22:39.051778   50270 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 21:22:39.051841   50270 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 21:22:39.052043   50270 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 21:22:39.052137   50270 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.502878 seconds
	I1207 21:22:39.052296   50270 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 21:22:39.052458   50270 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 21:22:39.052537   50270 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 21:22:39.052714   50270 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-483745 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1207 21:22:39.052802   50270 kubeadm.go:322] [bootstrap-token] Using token: 88595b.vk24k0k7lcyxvxlg
	I1207 21:22:39.054142   50270 out.go:204]   - Configuring RBAC rules ...
	I1207 21:22:39.054250   50270 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 21:22:39.054369   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 21:22:39.054470   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 21:22:39.054565   50270 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 21:22:39.054675   50270 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 21:22:39.054740   50270 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 21:22:39.054805   50270 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 21:22:39.054813   50270 kubeadm.go:322] 
	I1207 21:22:39.054905   50270 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 21:22:39.054917   50270 kubeadm.go:322] 
	I1207 21:22:39.054996   50270 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 21:22:39.055004   50270 kubeadm.go:322] 
	I1207 21:22:39.055031   50270 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 21:22:39.055107   50270 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 21:22:39.055174   50270 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 21:22:39.055187   50270 kubeadm.go:322] 
	I1207 21:22:39.055254   50270 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 21:22:39.055366   50270 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 21:22:39.055467   50270 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 21:22:39.055476   50270 kubeadm.go:322] 
	I1207 21:22:39.055565   50270 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1207 21:22:39.055655   50270 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 21:22:39.055663   50270 kubeadm.go:322] 
	I1207 21:22:39.055776   50270 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 88595b.vk24k0k7lcyxvxlg \
	I1207 21:22:39.055929   50270 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 \
	I1207 21:22:39.055969   50270 kubeadm.go:322]     --control-plane 	  
	I1207 21:22:39.055979   50270 kubeadm.go:322] 
	I1207 21:22:39.056099   50270 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 21:22:39.056111   50270 kubeadm.go:322] 
	I1207 21:22:39.056215   50270 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 88595b.vk24k0k7lcyxvxlg \
	I1207 21:22:39.056371   50270 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9735bcbc72dfd645f62a987fd793bf1e6c14637829ea0b0fc4774bee05e2bd50 
	I1207 21:22:39.056402   50270 cni.go:84] Creating CNI manager for ""
	I1207 21:22:39.056414   50270 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 21:22:39.058073   50270 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 21:22:39.059659   50270 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 21:22:39.078052   50270 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
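The 457-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube generates for the "kvm2" driver + "crio" runtime; its exact contents are not echoed in the log. A hedged sketch of how it could be inspected on the node (profile name taken from the log; the paths are the usual defaults and an assumption here):

    # print the bridge CNI config that was just written (457 bytes per the log)
    minikube -p old-k8s-version-483745 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
    # the bridge/host-local plugins it references normally live here
    minikube -p old-k8s-version-483745 ssh -- ls /opt/cni/bin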
	I1207 21:22:39.118479   50270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 21:22:39.118540   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=old-k8s-version-483745 minikube.k8s.io/updated_at=2023_12_07T21_22_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.118551   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.149391   50270 ops.go:34] apiserver oom_adj: -16
	I1207 21:22:39.334606   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:39.476182   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:40.075027   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:40.574693   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:41.074497   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:41.575214   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:42.075168   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:42.575162   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:43.074671   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:43.575406   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:44.074823   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:44.574597   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:45.075138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:45.575119   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:46.075437   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:46.575138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:47.075138   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:47.575171   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:48.074939   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:48.574679   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:49.075065   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:49.574571   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:50.074553   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:50.575129   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:51.075320   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:51.574806   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:52.075136   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:52.575144   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:53.075139   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:53.575394   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:54.075185   50270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 21:22:54.274051   50270 kubeadm.go:1088] duration metric: took 15.155559482s to wait for elevateKubeSystemPrivileges.
	I1207 21:22:54.274092   50270 kubeadm.go:406] StartCluster complete in 5m55.488226201s
	I1207 21:22:54.274140   50270 settings.go:142] acquiring lock: {Name:mk9396767607c5ff6b3d3a19e2271d9a7d1eb0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:22:54.274247   50270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:22:54.276679   50270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/kubeconfig: {Name:mke6a116815ad72dea31ccb8f27262944651c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 21:22:54.276902   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 21:22:54.276991   50270 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 21:22:54.277064   50270 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277090   50270 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-483745"
	W1207 21:22:54.277103   50270 addons.go:240] addon storage-provisioner should already be in state true
	I1207 21:22:54.277101   50270 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277089   50270 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-483745"
	I1207 21:22:54.277116   50270 config.go:182] Loaded profile config "old-k8s-version-483745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1207 21:22:54.277127   50270 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-483745"
	I1207 21:22:54.277152   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.277119   50270 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-483745"
	W1207 21:22:54.277169   50270 addons.go:240] addon metrics-server should already be in state true
	I1207 21:22:54.277208   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.277529   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277564   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277573   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.277581   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.277591   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.277612   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.293696   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I1207 21:22:54.293908   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I1207 21:22:54.294118   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.294622   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.294642   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.294656   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.295100   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.295119   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.295182   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.295512   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.295671   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.295709   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I1207 21:22:54.295752   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.295791   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.296131   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.296662   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.296681   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.297077   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.297597   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.297635   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.299605   50270 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-483745"
	W1207 21:22:54.299630   50270 addons.go:240] addon default-storageclass should already be in state true
	I1207 21:22:54.299658   50270 host.go:66] Checking if "old-k8s-version-483745" exists ...
	I1207 21:22:54.300047   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.300087   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.314531   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1207 21:22:54.315168   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.315718   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.315804   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I1207 21:22:54.315809   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.316447   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.316491   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.316657   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.316979   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.317005   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.317340   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.317887   50270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:22:54.317945   50270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:22:54.319086   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.321272   50270 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1207 21:22:54.320074   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I1207 21:22:54.322834   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 21:22:54.322849   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 21:22:54.322863   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.323218   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.323677   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.323689   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.323997   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.324166   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.326460   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.328172   50270 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 21:22:54.327148   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.328366   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.329567   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.329588   50270 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:22:54.329593   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.329600   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 21:22:54.329613   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.329725   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.329909   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.330088   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.333435   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.334161   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.334192   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.334480   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.334786   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.334959   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.335091   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.336340   50270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40483
	I1207 21:22:54.336672   50270 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:22:54.337021   50270 main.go:141] libmachine: Using API Version  1
	I1207 21:22:54.337034   50270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:22:54.337316   50270 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:22:54.337486   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetState
	I1207 21:22:54.338808   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .DriverName
	I1207 21:22:54.339043   50270 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 21:22:54.339053   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 21:22:54.339064   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHHostname
	I1207 21:22:54.341591   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.341937   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:35", ip: ""} in network mk-old-k8s-version-483745: {Iface:virbr3 ExpiryTime:2023-12-07 22:16:41 +0000 UTC Type:0 Mac:52:54:00:55:c8:35 Iaid: IPaddr:192.168.61.171 Prefix:24 Hostname:old-k8s-version-483745 Clientid:01:52:54:00:55:c8:35}
	I1207 21:22:54.341960   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | domain old-k8s-version-483745 has defined IP address 192.168.61.171 and MAC address 52:54:00:55:c8:35 in network mk-old-k8s-version-483745
	I1207 21:22:54.342127   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHPort
	I1207 21:22:54.342285   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHKeyPath
	I1207 21:22:54.342453   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .GetSSHUsername
	I1207 21:22:54.342592   50270 sshutil.go:53] new ssh client: &{IP:192.168.61.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/old-k8s-version-483745/id_rsa Username:docker}
	I1207 21:22:54.385908   50270 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-483745" context rescaled to 1 replicas
	I1207 21:22:54.385959   50270 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.171 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 21:22:54.387637   50270 out.go:177] * Verifying Kubernetes components...
	I1207 21:22:54.388616   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:22:54.604286   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 21:22:54.671574   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 21:22:54.671601   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1207 21:22:54.752688   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 21:22:54.752714   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 21:22:54.792943   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 21:22:54.847458   50270 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:22:54.847489   50270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 21:22:54.916698   50270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 21:22:54.931860   50270 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-483745" to be "Ready" ...
	I1207 21:22:54.931924   50270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 21:22:55.152010   50270 node_ready.go:49] node "old-k8s-version-483745" has status "Ready":"True"
	I1207 21:22:55.152041   50270 node_ready.go:38] duration metric: took 220.147741ms waiting for node "old-k8s-version-483745" to be "Ready" ...
	I1207 21:22:55.152055   50270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:55.356283   50270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:55.654243   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.049922238s)
	I1207 21:22:55.654296   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.654313   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.654661   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.654687   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.654694   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Closing plugin on server side
	I1207 21:22:55.654703   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.654715   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.655010   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.655052   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.693855   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.693876   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.694176   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.694197   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.927642   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.13465835s)
	I1207 21:22:55.927714   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.927731   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.928056   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.928076   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:55.928087   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:55.928096   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:55.928395   50270 main.go:141] libmachine: (old-k8s-version-483745) DBG | Closing plugin on server side
	I1207 21:22:55.928413   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:55.928428   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.033797   50270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117050773s)
	I1207 21:22:56.033845   50270 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.101898699s)
	I1207 21:22:56.033881   50270 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1207 21:22:56.033850   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:56.033918   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:56.034207   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:56.034220   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.034229   50270 main.go:141] libmachine: Making call to close driver server
	I1207 21:22:56.034236   50270 main.go:141] libmachine: (old-k8s-version-483745) Calling .Close
	I1207 21:22:56.034460   50270 main.go:141] libmachine: Successfully made call to close driver server
	I1207 21:22:56.034480   50270 main.go:141] libmachine: Making call to close connection to plugin binary
	I1207 21:22:56.034516   50270 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-483745"
	I1207 21:22:56.036701   50270 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1207 21:22:56.038078   50270 addons.go:502] enable addons completed in 1.76109636s: enabled=[default-storageclass storage-provisioner metrics-server]
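The CoreDNS ConfigMap rewrite completed above (the sed pipeline into "kubectl replace") injects a hosts block mapping host.minikube.internal to 192.168.61.1. A short sketch of how the injected record could be confirmed now that the addons are enabled (the pod name and busybox image below are illustrative choices, not from the test):

    # the Corefile should now contain: hosts { 192.168.61.1 host.minikube.internal ... }
    kubectl -n kube-system get configmap coredns -o yaml
    # resolve the injected name from inside the cluster
    kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup host.minikube.internal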
	I1207 21:22:57.718454   50270 pod_ready.go:102] pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace has status "Ready":"False"
	I1207 21:22:58.708880   50270 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-jvh5w" not found
	I1207 21:22:58.708910   50270 pod_ready.go:81] duration metric: took 3.352602717s waiting for pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace to be "Ready" ...
	E1207 21:22:58.708920   50270 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-jvh5w" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-jvh5w" not found
	I1207 21:22:58.708930   50270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.715179   50270 pod_ready.go:92] pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace has status "Ready":"True"
	I1207 21:22:58.715205   50270 pod_ready.go:81] duration metric: took 6.268335ms waiting for pod "coredns-5644d7b6d9-zv7xv" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.715219   50270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-42fzb" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.720511   50270 pod_ready.go:92] pod "kube-proxy-42fzb" in "kube-system" namespace has status "Ready":"True"
	I1207 21:22:58.720526   50270 pod_ready.go:81] duration metric: took 5.302238ms waiting for pod "kube-proxy-42fzb" in "kube-system" namespace to be "Ready" ...
	I1207 21:22:58.720544   50270 pod_ready.go:38] duration metric: took 3.568467628s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 21:22:58.720558   50270 api_server.go:52] waiting for apiserver process to appear ...
	I1207 21:22:58.720609   50270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 21:22:58.737687   50270 api_server.go:72] duration metric: took 4.351680673s to wait for apiserver process to appear ...
	I1207 21:22:58.737712   50270 api_server.go:88] waiting for apiserver healthz status ...
	I1207 21:22:58.737730   50270 api_server.go:253] Checking apiserver healthz at https://192.168.61.171:8443/healthz ...
	I1207 21:22:58.744722   50270 api_server.go:279] https://192.168.61.171:8443/healthz returned 200:
	ok
	I1207 21:22:58.745867   50270 api_server.go:141] control plane version: v1.16.0
	I1207 21:22:58.745887   50270 api_server.go:131] duration metric: took 8.167725ms to wait for apiserver health ...
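The healthz wait above probes the apiserver endpoint directly and treats the literal body "ok" as healthy. A minimal equivalent by hand against the address reported in the log (unauthenticated /healthz is normally readable via the system:public-info-viewer role, but that is an assumption about this cluster's RBAC; -k skips certificate verification):

    # expect HTTP 200 with body "ok", matching the log above
    curl -k https://192.168.61.171:8443/healthz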
	I1207 21:22:58.745897   50270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 21:22:58.750259   50270 system_pods.go:59] 4 kube-system pods found
	I1207 21:22:58.750278   50270 system_pods.go:61] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.750283   50270 system_pods.go:61] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.750292   50270 system_pods.go:61] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.750306   50270 system_pods.go:61] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.750319   50270 system_pods.go:74] duration metric: took 4.415504ms to wait for pod list to return data ...
	I1207 21:22:58.750328   50270 default_sa.go:34] waiting for default service account to be created ...
	I1207 21:22:58.753151   50270 default_sa.go:45] found service account: "default"
	I1207 21:22:58.753173   50270 default_sa.go:55] duration metric: took 2.836309ms for default service account to be created ...
	I1207 21:22:58.753181   50270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 21:22:58.757164   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:58.757188   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.757195   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.757212   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.757223   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.757246   50270 retry.go:31] will retry after 195.542562ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:58.957411   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:58.957443   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:58.957451   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:58.957461   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:58.957471   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:58.957494   50270 retry.go:31] will retry after 294.291725ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:59.264559   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:59.264599   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:59.264608   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:59.264620   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:59.264632   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:59.264651   50270 retry.go:31] will retry after 392.704433ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:22:59.663939   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:22:59.663967   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:22:59.663973   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:22:59.663979   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:22:59.663985   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 21:22:59.664003   50270 retry.go:31] will retry after 598.787872ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:00.268415   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:00.268441   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:00.268447   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:00.268453   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:00.268458   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:00.268472   50270 retry.go:31] will retry after 554.6659ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:00.829267   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:00.829293   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:00.829299   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:00.829305   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:00.829309   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:00.829325   50270 retry.go:31] will retry after 832.708436ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:01.667497   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:01.667526   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:01.667532   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:01.667539   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:01.667543   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:01.667560   50270 retry.go:31] will retry after 824.504206ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:02.497009   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:02.497033   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:02.497038   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:02.497045   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:02.497049   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:02.497064   50270 retry.go:31] will retry after 1.335460815s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:03.837788   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:03.837816   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:03.837821   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:03.837828   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:03.837833   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:03.837848   50270 retry.go:31] will retry after 1.185883705s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:05.028679   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:05.028712   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:05.028721   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:05.028731   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:05.028738   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:05.028758   50270 retry.go:31] will retry after 2.162817833s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:07.196435   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:07.196468   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:07.196476   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:07.196485   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:07.196493   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:07.196512   50270 retry.go:31] will retry after 2.853202831s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:10.054277   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:10.054303   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:10.054308   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:10.054315   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:10.054320   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:10.054335   50270 retry.go:31] will retry after 3.392213767s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:13.452019   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:13.452046   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:13.452052   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:13.452059   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:13.452064   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:13.452081   50270 retry.go:31] will retry after 3.42315118s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:16.882830   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:16.882856   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:16.882861   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:16.882868   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:16.882873   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:16.882887   50270 retry.go:31] will retry after 3.42232982s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:20.310740   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:20.310766   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:20.310771   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:20.310780   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:20.310785   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:20.310801   50270 retry.go:31] will retry after 6.110306117s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:26.426492   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:26.426520   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:26.426525   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:26.426532   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:26.426537   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:26.426554   50270 retry.go:31] will retry after 5.458076236s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:31.890544   50270 system_pods.go:86] 4 kube-system pods found
	I1207 21:23:31.890575   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:31.890580   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:31.890589   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:31.890593   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:31.890611   50270 retry.go:31] will retry after 10.030622922s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1207 21:23:41.928589   50270 system_pods.go:86] 6 kube-system pods found
	I1207 21:23:41.928622   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:41.928630   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:23:41.928637   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:23:41.928642   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:41.928651   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:41.928659   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:41.928677   50270 retry.go:31] will retry after 11.183539963s: missing components: kube-controller-manager, kube-scheduler
	I1207 21:23:53.119257   50270 system_pods.go:86] 8 kube-system pods found
	I1207 21:23:53.119284   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:23:53.119292   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:23:53.119298   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:23:53.119304   50270 system_pods.go:89] "kube-controller-manager-old-k8s-version-483745" [069a811c-4601-4e3c-bf64-77e4cf8d8e0e] Pending
	I1207 21:23:53.119309   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:23:53.119315   50270 system_pods.go:89] "kube-scheduler-old-k8s-version-483745" [1fa6f211-aa49-4ab9-ba1d-d613e7673ba8] Running
	I1207 21:23:53.119325   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:23:53.119332   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:23:53.119353   50270 retry.go:31] will retry after 13.123307809s: missing components: kube-controller-manager
	I1207 21:24:06.249016   50270 system_pods.go:86] 8 kube-system pods found
	I1207 21:24:06.249042   50270 system_pods.go:89] "coredns-5644d7b6d9-zv7xv" [44eb0c7e-6ec5-4ff8-95f3-869272f00080] Running
	I1207 21:24:06.249048   50270 system_pods.go:89] "etcd-old-k8s-version-483745" [a275cfc0-7b07-4d83-832f-1b234599023e] Running
	I1207 21:24:06.249054   50270 system_pods.go:89] "kube-apiserver-old-k8s-version-483745" [0fd7361b-eb73-427e-beaa-e114a80963ae] Running
	I1207 21:24:06.249059   50270 system_pods.go:89] "kube-controller-manager-old-k8s-version-483745" [069a811c-4601-4e3c-bf64-77e4cf8d8e0e] Running
	I1207 21:24:06.249064   50270 system_pods.go:89] "kube-proxy-42fzb" [66e47a27-187e-4c1b-9d74-222927a4d2f8] Running
	I1207 21:24:06.249068   50270 system_pods.go:89] "kube-scheduler-old-k8s-version-483745" [1fa6f211-aa49-4ab9-ba1d-d613e7673ba8] Running
	I1207 21:24:06.249074   50270 system_pods.go:89] "metrics-server-74d5856cc6-tppp6" [9204fc2a-3771-4b93-9e41-faa1cf036232] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 21:24:06.249079   50270 system_pods.go:89] "storage-provisioner" [5497aade-c717-4eb1-8cfd-d8f91229656c] Running
	I1207 21:24:06.249087   50270 system_pods.go:126] duration metric: took 1m7.495900916s to wait for k8s-apps to be running ...
	I1207 21:24:06.249092   50270 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 21:24:06.249137   50270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 21:24:06.265801   50270 system_svc.go:56] duration metric: took 16.700976ms WaitForService to wait for kubelet.
	I1207 21:24:06.265820   50270 kubeadm.go:581] duration metric: took 1m11.879821949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 21:24:06.265837   50270 node_conditions.go:102] verifying NodePressure condition ...
	I1207 21:24:06.269326   50270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1207 21:24:06.269346   50270 node_conditions.go:123] node cpu capacity is 2
	I1207 21:24:06.269356   50270 node_conditions.go:105] duration metric: took 3.51576ms to run NodePressure ...
	I1207 21:24:06.269366   50270 start.go:228] waiting for startup goroutines ...
	I1207 21:24:06.269371   50270 start.go:233] waiting for cluster config update ...
	I1207 21:24:06.269384   50270 start.go:242] writing updated cluster config ...
	I1207 21:24:06.269660   50270 ssh_runner.go:195] Run: rm -f paused
	I1207 21:24:06.317992   50270 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1207 21:24:06.320122   50270 out.go:177] 
	W1207 21:24:06.321437   50270 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1207 21:24:06.322708   50270 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1207 21:24:06.324092   50270 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-483745" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Thu 2023-12-07 21:16:40 UTC, ends at Thu 2023-12-07 21:35:15 UTC. --
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.041742795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984915041724969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=d25c5958-07ac-40ea-9988-13a113818525 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.043039560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f53dcf4b-540d-4738-9848-b7656afad3a8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.043088011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f53dcf4b-540d-4738-9848-b7656afad3a8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.043264870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58a8ba392edecb3d00810d90c19c371db1fc4a5035210547f76a909aef9f7b0a,PodSandboxId:13a5a78f3280613f4ee4bad7497b66422b29b4e1c1bb5182824fa6aae420a06c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984177630445265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497aade-c717-4eb1-8cfd-d8f91229656c,},Annotations:map[string]string{io.kubernetes.container.hash: 6cefebd1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc88f109dc5433f238266a0a9e0b4eb39f762a085ae8473e064fadf7842e9f7,PodSandboxId:88d22b1eab8503d2dbfa0027cf3b918283869f3bda1676f1a62cb3ef4adf8a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701984176165824480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42fzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e47a27-187e-4c1b-9d74-222927a4d2f8,},Annotations:map[string]string{io.kubernetes.container.hash: 52d7986e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b63104cb805ba9e0b90105331e31179e5f03f9bf8ca6b0664ff3ece5d42a07,PodSandboxId:4e1076c3f71771c1b2a43839bc07690a19d972c963722ff18bb750850d230eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701984174741472222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zv7xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44eb0c7e-6ec5-4ff8-95f3-869272f00080,},Annotations:map[string]string{io.kubernetes.container.hash: f8acee80,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331295ad8803ad3e15d8dcef37fc70853b317eb9c07a314aedb60d26833d9046,PodSandboxId:eb8dccc9edfd76d8fa04f2910918b57c2ed4e1824d53b1d0dce23c83e5d691da,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701984150450262474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0a20b7f23231c3534616bc9499b9e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3289cd04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f867a2721453142a19279352d199ffdf0ab052a5361866eb47e1c18452daba6c,PodSandboxId:b20cab56fbb70274eea614b9a5225e7bafe2cb73450401f80e80a9475dfdbf46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701984149272121058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4933bc8ac87550b10f3a57fdc04bb80ed9d40001b53169181f06c18a054ae55,PodSandboxId:9948bd40f128298e2734ac22d31135b0bc6f4e961ec1b79bc51bdd803033f50c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701984149008239075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cf790766fb639ad04b45229aa80df91433cb199260f206a7c81d3870128023,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701984148430913138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc89bc78e615568c4552af490164e6160551c5fefbcab838818bb1663ae3d8e0,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701983831233935451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f53dcf4b-540d-4738-9848-b7656afad3a8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.085715772Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=64b1379e-0456-4528-a4ca-dd34cd686dc2 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.085822581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=64b1379e-0456-4528-a4ca-dd34cd686dc2 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.087897617Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=221021a9-30a3-46a3-be03-1f96417842fe name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.088487590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984915088467326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=221021a9-30a3-46a3-be03-1f96417842fe name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.089336768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e159acf3-0757-4128-a322-fa3ef64b1552 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.089383540Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e159acf3-0757-4128-a322-fa3ef64b1552 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.089691476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58a8ba392edecb3d00810d90c19c371db1fc4a5035210547f76a909aef9f7b0a,PodSandboxId:13a5a78f3280613f4ee4bad7497b66422b29b4e1c1bb5182824fa6aae420a06c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984177630445265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497aade-c717-4eb1-8cfd-d8f91229656c,},Annotations:map[string]string{io.kubernetes.container.hash: 6cefebd1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc88f109dc5433f238266a0a9e0b4eb39f762a085ae8473e064fadf7842e9f7,PodSandboxId:88d22b1eab8503d2dbfa0027cf3b918283869f3bda1676f1a62cb3ef4adf8a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701984176165824480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42fzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e47a27-187e-4c1b-9d74-222927a4d2f8,},Annotations:map[string]string{io.kubernetes.container.hash: 52d7986e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b63104cb805ba9e0b90105331e31179e5f03f9bf8ca6b0664ff3ece5d42a07,PodSandboxId:4e1076c3f71771c1b2a43839bc07690a19d972c963722ff18bb750850d230eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701984174741472222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zv7xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44eb0c7e-6ec5-4ff8-95f3-869272f00080,},Annotations:map[string]string{io.kubernetes.container.hash: f8acee80,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331295ad8803ad3e15d8dcef37fc70853b317eb9c07a314aedb60d26833d9046,PodSandboxId:eb8dccc9edfd76d8fa04f2910918b57c2ed4e1824d53b1d0dce23c83e5d691da,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701984150450262474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0a20b7f23231c3534616bc9499b9e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3289cd04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f867a2721453142a19279352d199ffdf0ab052a5361866eb47e1c18452daba6c,PodSandboxId:b20cab56fbb70274eea614b9a5225e7bafe2cb73450401f80e80a9475dfdbf46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701984149272121058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4933bc8ac87550b10f3a57fdc04bb80ed9d40001b53169181f06c18a054ae55,PodSandboxId:9948bd40f128298e2734ac22d31135b0bc6f4e961ec1b79bc51bdd803033f50c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701984149008239075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cf790766fb639ad04b45229aa80df91433cb199260f206a7c81d3870128023,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701984148430913138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc89bc78e615568c4552af490164e6160551c5fefbcab838818bb1663ae3d8e0,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701983831233935451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e159acf3-0757-4128-a322-fa3ef64b1552 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.130657009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a9b28798-3ad1-40cb-a55e-dd4c92bf7863 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.130715789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a9b28798-3ad1-40cb-a55e-dd4c92bf7863 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.131837181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a7e275ed-c2c9-4ed0-b03a-451c95924717 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.132208121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984915132196570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=a7e275ed-c2c9-4ed0-b03a-451c95924717 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.133028446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7a32f159-b376-44f2-8aba-af16583c6557 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.133076900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7a32f159-b376-44f2-8aba-af16583c6557 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.133238702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58a8ba392edecb3d00810d90c19c371db1fc4a5035210547f76a909aef9f7b0a,PodSandboxId:13a5a78f3280613f4ee4bad7497b66422b29b4e1c1bb5182824fa6aae420a06c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984177630445265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497aade-c717-4eb1-8cfd-d8f91229656c,},Annotations:map[string]string{io.kubernetes.container.hash: 6cefebd1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc88f109dc5433f238266a0a9e0b4eb39f762a085ae8473e064fadf7842e9f7,PodSandboxId:88d22b1eab8503d2dbfa0027cf3b918283869f3bda1676f1a62cb3ef4adf8a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701984176165824480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42fzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e47a27-187e-4c1b-9d74-222927a4d2f8,},Annotations:map[string]string{io.kubernetes.container.hash: 52d7986e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b63104cb805ba9e0b90105331e31179e5f03f9bf8ca6b0664ff3ece5d42a07,PodSandboxId:4e1076c3f71771c1b2a43839bc07690a19d972c963722ff18bb750850d230eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701984174741472222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zv7xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44eb0c7e-6ec5-4ff8-95f3-869272f00080,},Annotations:map[string]string{io.kubernetes.container.hash: f8acee80,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331295ad8803ad3e15d8dcef37fc70853b317eb9c07a314aedb60d26833d9046,PodSandboxId:eb8dccc9edfd76d8fa04f2910918b57c2ed4e1824d53b1d0dce23c83e5d691da,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701984150450262474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0a20b7f23231c3534616bc9499b9e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3289cd04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f867a2721453142a19279352d199ffdf0ab052a5361866eb47e1c18452daba6c,PodSandboxId:b20cab56fbb70274eea614b9a5225e7bafe2cb73450401f80e80a9475dfdbf46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701984149272121058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4933bc8ac87550b10f3a57fdc04bb80ed9d40001b53169181f06c18a054ae55,PodSandboxId:9948bd40f128298e2734ac22d31135b0bc6f4e961ec1b79bc51bdd803033f50c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701984149008239075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cf790766fb639ad04b45229aa80df91433cb199260f206a7c81d3870128023,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701984148430913138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc89bc78e615568c4552af490164e6160551c5fefbcab838818bb1663ae3d8e0,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701983831233935451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7a32f159-b376-44f2-8aba-af16583c6557 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.176129906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3c49ee85-d6c9-43cb-ac61-88902fbd11c9 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.176191794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3c49ee85-d6c9-43cb-ac61-88902fbd11c9 name=/runtime.v1.RuntimeService/Version
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.178158047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b3fbc09c-82db-4a0d-bc9d-b3dbe7e4f3c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.178665102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701984915178649198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=b3fbc09c-82db-4a0d-bc9d-b3dbe7e4f3c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.179265442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6be8aff2-c10e-46cf-8be5-ea249849e019 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.179324318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6be8aff2-c10e-46cf-8be5-ea249849e019 name=/runtime.v1.RuntimeService/ListContainers
	Dec 07 21:35:15 old-k8s-version-483745 crio[716]: time="2023-12-07 21:35:15.179487551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58a8ba392edecb3d00810d90c19c371db1fc4a5035210547f76a909aef9f7b0a,PodSandboxId:13a5a78f3280613f4ee4bad7497b66422b29b4e1c1bb5182824fa6aae420a06c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701984177630445265,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5497aade-c717-4eb1-8cfd-d8f91229656c,},Annotations:map[string]string{io.kubernetes.container.hash: 6cefebd1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc88f109dc5433f238266a0a9e0b4eb39f762a085ae8473e064fadf7842e9f7,PodSandboxId:88d22b1eab8503d2dbfa0027cf3b918283869f3bda1676f1a62cb3ef4adf8a19,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701984176165824480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42fzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e47a27-187e-4c1b-9d74-222927a4d2f8,},Annotations:map[string]string{io.kubernetes.container.hash: 52d7986e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0b63104cb805ba9e0b90105331e31179e5f03f9bf8ca6b0664ff3ece5d42a07,PodSandboxId:4e1076c3f71771c1b2a43839bc07690a19d972c963722ff18bb750850d230eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701984174741472222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-zv7xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44eb0c7e-6ec5-4ff8-95f3-869272f00080,},Annotations:map[string]string{io.kubernetes.container.hash: f8acee80,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:331295ad8803ad3e15d8dcef37fc70853b317eb9c07a314aedb60d26833d9046,PodSandboxId:eb8dccc9edfd76d8fa04f2910918b57c2ed4e1824d53b1d0dce23c83e5d691da,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701984150450262474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0a20b7f23231c3534616bc9499b9e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 3289cd04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f867a2721453142a19279352d199ffdf0ab052a5361866eb47e1c18452daba6c,PodSandboxId:b20cab56fbb70274eea614b9a5225e7bafe2cb73450401f80e80a9475dfdbf46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701984149272121058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4933bc8ac87550b10f3a57fdc04bb80ed9d40001b53169181f06c18a054ae55,PodSandboxId:9948bd40f128298e2734ac22d31135b0bc6f4e961ec1b79bc51bdd803033f50c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701984149008239075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cf790766fb639ad04b45229aa80df91433cb199260f206a7c81d3870128023,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701984148430913138,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc89bc78e615568c4552af490164e6160551c5fefbcab838818bb1663ae3d8e0,PodSandboxId:de1f3bc1e6e08a25512208b8ae422fe1c1f6acf0d6c3ecd334f7c283d2583808,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701983831233935451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-483745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156baefbb5614920114043110edcae59,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8a39f31e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6be8aff2-c10e-46cf-8be5-ea249849e019 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	58a8ba392edec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   13a5a78f32806       storage-provisioner
	afc88f109dc54       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   88d22b1eab850       kube-proxy-42fzb
	d0b63104cb805       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   4e1076c3f7177       coredns-5644d7b6d9-zv7xv
	331295ad8803a       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   12 minutes ago      Running             etcd                      0                   eb8dccc9edfd7       etcd-old-k8s-version-483745
	f867a27214531       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   12 minutes ago      Running             kube-controller-manager   0                   b20cab56fbb70       kube-controller-manager-old-k8s-version-483745
	b4933bc8ac875       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   12 minutes ago      Running             kube-scheduler            0                   9948bd40f1282       kube-scheduler-old-k8s-version-483745
	e0cf790766fb6       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   12 minutes ago      Running             kube-apiserver            1                   de1f3bc1e6e08       kube-apiserver-old-k8s-version-483745
	bc89bc78e6155       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   18 minutes ago      Exited              kube-apiserver            0                   de1f3bc1e6e08       kube-apiserver-old-k8s-version-483745
	
	* 
	* ==> coredns [d0b63104cb805ba9e0b90105331e31179e5f03f9bf8ca6b0664ff3ece5d42a07] <==
	* .:53
	2023-12-07T21:22:55.291Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-12-07T21:22:55.291Z [INFO] CoreDNS-1.6.2
	2023-12-07T21:22:55.291Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-12-07T21:23:23.498Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-483745
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-483745
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=old-k8s-version-483745
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T21_22_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 21:22:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 21:34:34 +0000   Thu, 07 Dec 2023 21:22:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 21:34:34 +0000   Thu, 07 Dec 2023 21:22:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 21:34:34 +0000   Thu, 07 Dec 2023 21:22:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 21:34:34 +0000   Thu, 07 Dec 2023 21:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.171
	  Hostname:    old-k8s-version-483745
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 0503ac9ce1204b71b58758b2e780119d
	 System UUID:                0503ac9c-e120-4b71-b587-58b2e780119d
	 Boot ID:                    212aa850-f933-41b5-9d74-0efafc1dcbb0
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-zv7xv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                etcd-old-k8s-version-483745                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-apiserver-old-k8s-version-483745             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-483745    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-42fzb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-483745             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                metrics-server-74d5856cc6-tppp6                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-483745     Node old-k8s-version-483745 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet, old-k8s-version-483745     Node old-k8s-version-483745 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet, old-k8s-version-483745     Node old-k8s-version-483745 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-483745  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec 7 21:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069386] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.713381] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.628672] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148451] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.587213] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.284385] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.130545] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.165726] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.127736] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.228557] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[Dec 7 21:17] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +0.462761] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.525890] kauditd_printk_skb: 13 callbacks suppressed
	[Dec 7 21:18] kauditd_printk_skb: 4 callbacks suppressed
	[Dec 7 21:22] systemd-fstab-generator[3144]: Ignoring "noauto" for root device
	[ +27.586538] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 7 21:23] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [331295ad8803ad3e15d8dcef37fc70853b317eb9c07a314aedb60d26833d9046] <==
	* 2023-12-07 21:22:30.552006 I | raft: 136fc2291504415a became follower at term 0
	2023-12-07 21:22:30.552018 I | raft: newRaft 136fc2291504415a [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-12-07 21:22:30.552023 I | raft: 136fc2291504415a became follower at term 1
	2023-12-07 21:22:30.566443 W | auth: simple token is not cryptographically signed
	2023-12-07 21:22:30.571047 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-07 21:22:30.572394 I | etcdserver: 136fc2291504415a as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-07 21:22:30.573130 I | etcdserver/membership: added member 136fc2291504415a [https://192.168.61.171:2380] to cluster c5390b31b9ec6b0f
	2023-12-07 21:22:30.573888 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-07 21:22:30.574066 I | embed: listening for metrics on http://192.168.61.171:2381
	2023-12-07 21:22:30.574235 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-07 21:22:31.052750 I | raft: 136fc2291504415a is starting a new election at term 1
	2023-12-07 21:22:31.052891 I | raft: 136fc2291504415a became candidate at term 2
	2023-12-07 21:22:31.052904 I | raft: 136fc2291504415a received MsgVoteResp from 136fc2291504415a at term 2
	2023-12-07 21:22:31.053023 I | raft: 136fc2291504415a became leader at term 2
	2023-12-07 21:22:31.053031 I | raft: raft.node: 136fc2291504415a elected leader 136fc2291504415a at term 2
	2023-12-07 21:22:31.053306 I | etcdserver: setting up the initial cluster version to 3.3
	2023-12-07 21:22:31.055131 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-07 21:22:31.055289 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-07 21:22:31.055331 I | etcdserver: published {Name:old-k8s-version-483745 ClientURLs:[https://192.168.61.171:2379]} to cluster c5390b31b9ec6b0f
	2023-12-07 21:22:31.055350 I | embed: ready to serve client requests
	2023-12-07 21:22:31.055496 I | embed: ready to serve client requests
	2023-12-07 21:22:31.056853 I | embed: serving client requests on 192.168.61.171:2379
	2023-12-07 21:22:31.058877 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-07 21:32:31.079252 I | mvcc: store.index: compact 667
	2023-12-07 21:32:31.082211 I | mvcc: finished scheduled compaction at 667 (took 2.532887ms)
	
	* 
	* ==> kernel <==
	*  21:35:15 up 18 min,  0 users,  load average: 0.09, 0.09, 0.11
	Linux old-k8s-version-483745 5.10.57 #1 SMP Tue Dec 5 18:34:51 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [bc89bc78e615568c4552af490164e6160551c5fefbcab838818bb1663ae3d8e0] <==
	* W1207 21:22:25.868022       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.876051       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.880318       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.911461       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.927216       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.928058       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.942936       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.945262       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.948081       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.949766       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.963329       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:25.998281       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.000478       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.009999       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.026139       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.026987       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.029166       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.042407       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.047806       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.066680       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.070070       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.083890       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.084728       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.087746       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1207 21:22:26.113518       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [e0cf790766fb639ad04b45229aa80df91433cb199260f206a7c81d3870128023] <==
	* I1207 21:27:35.280102       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1207 21:27:35.280216       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 21:27:35.280287       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:27:35.280295       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:28:35.280813       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1207 21:28:35.281092       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 21:28:35.281198       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:28:35.281236       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:30:35.281796       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1207 21:30:35.282086       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 21:30:35.282174       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:30:35.282197       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:32:35.284460       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1207 21:32:35.284956       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 21:32:35.285190       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:32:35.285262       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1207 21:33:35.285631       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1207 21:33:35.285764       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 21:33:35.285831       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1207 21:33:35.285838       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [f867a2721453142a19279352d199ffdf0ab052a5361866eb47e1c18452daba6c] <==
	* E1207 21:28:57.271334       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:29:18.316120       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:29:27.523374       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:29:50.318196       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:29:57.775288       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:30:22.320198       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:30:28.027400       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:30:54.322319       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:30:58.279208       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:31:26.324831       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:31:28.531344       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:31:58.327005       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:31:58.783482       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1207 21:32:29.035180       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:32:30.328867       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:32:59.287073       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:33:02.331234       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:33:29.538747       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:33:34.333088       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:33:59.790860       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:34:06.335998       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:34:30.043283       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:34:38.338120       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1207 21:35:00.295463       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1207 21:35:10.339905       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [afc88f109dc5433f238266a0a9e0b4eb39f762a085ae8473e064fadf7842e9f7] <==
	* W1207 21:22:56.475248       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1207 21:22:56.483269       1 node.go:135] Successfully retrieved node IP: 192.168.61.171
	I1207 21:22:56.483292       1 server_others.go:149] Using iptables Proxier.
	I1207 21:22:56.483618       1 server.go:529] Version: v1.16.0
	I1207 21:22:56.490877       1 config.go:313] Starting service config controller
	I1207 21:22:56.495960       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1207 21:22:56.491069       1 config.go:131] Starting endpoints config controller
	I1207 21:22:56.496145       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1207 21:22:56.596334       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1207 21:22:56.596408       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [b4933bc8ac87550b10f3a57fdc04bb80ed9d40001b53169181f06c18a054ae55] <==
	* W1207 21:22:34.327913       1 authentication.go:79] Authentication is disabled
	I1207 21:22:34.327923       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1207 21:22:34.328613       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1207 21:22:34.387414       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:22:34.387942       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 21:22:34.388090       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 21:22:34.388166       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 21:22:34.388270       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 21:22:34.388318       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 21:22:34.388365       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 21:22:34.390333       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 21:22:34.390581       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 21:22:34.391067       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 21:22:34.393750       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 21:22:35.389657       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 21:22:35.390642       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 21:22:35.392175       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 21:22:35.393371       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 21:22:35.394522       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 21:22:35.395284       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 21:22:35.395852       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 21:22:35.398219       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 21:22:35.398671       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 21:22:35.400818       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 21:22:35.400824       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 21:16:40 UTC, ends at Thu 2023-12-07 21:35:15 UTC. --
	Dec 07 21:30:36 old-k8s-version-483745 kubelet[3163]: E1207 21:30:36.996456    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:30:48 old-k8s-version-483745 kubelet[3163]: E1207 21:30:48.997397    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:31:01 old-k8s-version-483745 kubelet[3163]: E1207 21:31:01.996732    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:31:16 old-k8s-version-483745 kubelet[3163]: E1207 21:31:16.997012    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:31:30 old-k8s-version-483745 kubelet[3163]: E1207 21:31:30.996429    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:31:42 old-k8s-version-483745 kubelet[3163]: E1207 21:31:42.996496    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:31:53 old-k8s-version-483745 kubelet[3163]: E1207 21:31:53.996802    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:32:08 old-k8s-version-483745 kubelet[3163]: E1207 21:32:08.996697    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:32:22 old-k8s-version-483745 kubelet[3163]: E1207 21:32:22.996294    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:32:28 old-k8s-version-483745 kubelet[3163]: E1207 21:32:28.094403    3163 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 07 21:32:36 old-k8s-version-483745 kubelet[3163]: E1207 21:32:36.996258    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:32:49 old-k8s-version-483745 kubelet[3163]: E1207 21:32:49.997869    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:33:01 old-k8s-version-483745 kubelet[3163]: E1207 21:33:01.996882    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:33:16 old-k8s-version-483745 kubelet[3163]: E1207 21:33:16.996712    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:33:31 old-k8s-version-483745 kubelet[3163]: E1207 21:33:31.996280    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:33:46 old-k8s-version-483745 kubelet[3163]: E1207 21:33:46.027788    3163 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 07 21:33:46 old-k8s-version-483745 kubelet[3163]: E1207 21:33:46.027886    3163 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 07 21:33:46 old-k8s-version-483745 kubelet[3163]: E1207 21:33:46.027933    3163 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 07 21:33:46 old-k8s-version-483745 kubelet[3163]: E1207 21:33:46.027959    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 07 21:33:58 old-k8s-version-483745 kubelet[3163]: E1207 21:33:58.996790    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:34:12 old-k8s-version-483745 kubelet[3163]: E1207 21:34:12.000773    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:34:25 old-k8s-version-483745 kubelet[3163]: E1207 21:34:25.996648    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:34:37 old-k8s-version-483745 kubelet[3163]: E1207 21:34:37.996698    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:34:51 old-k8s-version-483745 kubelet[3163]: E1207 21:34:51.998031    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 07 21:35:04 old-k8s-version-483745 kubelet[3163]: E1207 21:35:04.996369    3163 pod_workers.go:191] Error syncing pod 9204fc2a-3771-4b93-9e41-faa1cf036232 ("metrics-server-74d5856cc6-tppp6_kube-system(9204fc2a-3771-4b93-9e41-faa1cf036232)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [58a8ba392edecb3d00810d90c19c371db1fc4a5035210547f76a909aef9f7b0a] <==
	* I1207 21:22:57.736932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 21:22:57.747979       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 21:22:57.748162       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 21:22:57.757720       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 21:22:57.758013       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-483745_9ebadb03-9f62-4e02-9ef3-3252c0fc4977!
	I1207 21:22:57.763723       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"600ed97f-c126-4743-b541-bd4ad57551d8", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-483745_9ebadb03-9f62-4e02-9ef3-3252c0fc4977 became leader
	I1207 21:22:57.858514       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-483745_9ebadb03-9f62-4e02-9ef3-3252c0fc4977!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-483745 -n old-k8s-version-483745
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-483745 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-tppp6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-483745 describe pod metrics-server-74d5856cc6-tppp6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-483745 describe pod metrics-server-74d5856cc6-tppp6: exit status 1 (69.279806ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-tppp6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-483745 describe pod metrics-server-74d5856cc6-tppp6: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (126.94s)
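For readers unfamiliar with the field selector the post-mortem helper uses, the sketch below is a hypothetical, minimal client-go equivalent of the `kubectl get po -A --field-selector=status.phase!=Running` query shown above. The test itself shells out to kubectl, so the use of client-go and the kubeconfig path here are assumptions for illustration only.

// Illustrative sketch, not part of the test suite: list pods whose phase is
// not Running, mirroring the post-mortem helper's kubectl field selector.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same filter as the helper: every pod, in any namespace, not in phase Running.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}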

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (139.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-155321 --alsologtostderr -v=3
E1207 21:36:41.699568   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-155321 --alsologtostderr -v=3: exit status 82 (2m1.200442979s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-155321"  ...
	* Stopping node "newest-cni-155321"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 21:36:19.309116   56978 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:36:19.309276   56978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:36:19.309284   56978 out.go:309] Setting ErrFile to fd 2...
	I1207 21:36:19.309288   56978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:36:19.309503   56978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:36:19.309786   56978 out.go:303] Setting JSON to false
	I1207 21:36:19.309890   56978 mustload.go:65] Loading cluster: newest-cni-155321
	I1207 21:36:19.310417   56978 config.go:182] Loaded profile config "newest-cni-155321": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:36:19.310506   56978 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/newest-cni-155321/config.json ...
	I1207 21:36:19.310714   56978 mustload.go:65] Loading cluster: newest-cni-155321
	I1207 21:36:19.310870   56978 config.go:182] Loaded profile config "newest-cni-155321": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1207 21:36:19.310918   56978 stop.go:39] StopHost: newest-cni-155321
	I1207 21:36:19.311421   56978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:36:19.311469   56978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:36:19.326151   56978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38323
	I1207 21:36:19.326657   56978 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:36:19.327288   56978 main.go:141] libmachine: Using API Version  1
	I1207 21:36:19.327319   56978 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:36:19.327789   56978 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:36:19.330023   56978 out.go:177] * Stopping node "newest-cni-155321"  ...
	I1207 21:36:19.332583   56978 main.go:141] libmachine: Stopping "newest-cni-155321"...
	I1207 21:36:19.332615   56978 main.go:141] libmachine: (newest-cni-155321) Calling .GetState
	I1207 21:36:19.334641   56978 main.go:141] libmachine: (newest-cni-155321) Calling .Stop
	I1207 21:36:19.338513   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 0/60
	I1207 21:36:20.340610   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 1/60
	I1207 21:36:21.342071   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 2/60
	I1207 21:36:22.344445   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 3/60
	I1207 21:36:23.345654   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 4/60
	I1207 21:36:24.347954   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 5/60
	I1207 21:36:25.349946   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 6/60
	I1207 21:36:26.351412   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 7/60
	I1207 21:36:27.353237   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 8/60
	I1207 21:36:28.354871   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 9/60
	I1207 21:36:29.357301   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 10/60
	I1207 21:36:30.358755   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 11/60
	I1207 21:36:31.361012   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 12/60
	I1207 21:36:32.362370   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 13/60
	I1207 21:36:33.364484   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 14/60
	I1207 21:36:34.366338   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 15/60
	I1207 21:36:35.368485   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 16/60
	I1207 21:36:36.370297   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 17/60
	I1207 21:36:37.371613   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 18/60
	I1207 21:36:38.373733   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 19/60
	I1207 21:36:39.376255   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 20/60
	I1207 21:36:40.377676   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 21/60
	I1207 21:36:41.379173   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 22/60
	I1207 21:36:42.380552   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 23/60
	I1207 21:36:43.383035   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 24/60
	I1207 21:36:44.384938   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 25/60
	I1207 21:36:45.386460   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 26/60
	I1207 21:36:46.388370   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 27/60
	I1207 21:36:47.389570   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 28/60
	I1207 21:36:48.390807   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 29/60
	I1207 21:36:49.393347   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 30/60
	I1207 21:36:50.395309   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 31/60
	I1207 21:36:51.396906   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 32/60
	I1207 21:36:52.398363   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 33/60
	I1207 21:36:53.400429   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 34/60
	I1207 21:36:54.401715   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 35/60
	I1207 21:36:55.403050   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 36/60
	I1207 21:36:56.404462   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 37/60
	I1207 21:36:57.406253   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 38/60
	I1207 21:36:58.407838   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 39/60
	I1207 21:36:59.409802   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 40/60
	I1207 21:37:00.411808   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 41/60
	I1207 21:37:01.413229   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 42/60
	I1207 21:37:02.414637   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 43/60
	I1207 21:37:03.416012   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 44/60
	I1207 21:37:04.417949   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 45/60
	I1207 21:37:05.419879   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 46/60
	I1207 21:37:06.421375   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 47/60
	I1207 21:37:07.422817   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 48/60
	I1207 21:37:08.425388   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 49/60
	I1207 21:37:09.427302   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 50/60
	I1207 21:37:10.428827   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 51/60
	I1207 21:37:11.430360   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 52/60
	I1207 21:37:12.432635   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 53/60
	I1207 21:37:13.434672   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 54/60
	I1207 21:37:14.436540   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 55/60
	I1207 21:37:15.437872   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 56/60
	I1207 21:37:16.439626   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 57/60
	I1207 21:37:17.441232   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 58/60
	I1207 21:37:18.443265   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 59/60
	I1207 21:37:19.444612   56978 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1207 21:37:19.444709   56978 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:37:19.444736   56978 retry.go:31] will retry after 838.215248ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:37:20.283540   56978 stop.go:39] StopHost: newest-cni-155321
	I1207 21:37:20.284048   56978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 21:37:20.284102   56978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 21:37:20.302791   56978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I1207 21:37:20.303224   56978 main.go:141] libmachine: () Calling .GetVersion
	I1207 21:37:20.303946   56978 main.go:141] libmachine: Using API Version  1
	I1207 21:37:20.303975   56978 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 21:37:20.304376   56978 main.go:141] libmachine: () Calling .GetMachineName
	I1207 21:37:20.306575   56978 out.go:177] * Stopping node "newest-cni-155321"  ...
	I1207 21:37:20.307972   56978 main.go:141] libmachine: Stopping "newest-cni-155321"...
	I1207 21:37:20.307991   56978 main.go:141] libmachine: (newest-cni-155321) Calling .GetState
	I1207 21:37:20.309800   56978 main.go:141] libmachine: (newest-cni-155321) Calling .Stop
	I1207 21:37:20.313399   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 0/60
	I1207 21:37:21.314947   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 1/60
	I1207 21:37:22.316449   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 2/60
	I1207 21:37:23.317907   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 3/60
	I1207 21:37:24.319721   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 4/60
	I1207 21:37:25.321877   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 5/60
	I1207 21:37:26.323247   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 6/60
	I1207 21:37:27.324955   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 7/60
	I1207 21:37:28.326342   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 8/60
	I1207 21:37:29.327901   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 9/60
	I1207 21:37:30.330099   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 10/60
	I1207 21:37:31.332180   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 11/60
	I1207 21:37:32.333494   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 12/60
	I1207 21:37:33.335121   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 13/60
	I1207 21:37:34.336695   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 14/60
	I1207 21:37:35.338625   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 15/60
	I1207 21:37:36.340367   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 16/60
	I1207 21:37:37.341871   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 17/60
	I1207 21:37:38.347541   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 18/60
	I1207 21:37:39.349516   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 19/60
	I1207 21:37:40.351867   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 20/60
	I1207 21:37:41.353273   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 21/60
	I1207 21:37:42.354866   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 22/60
	I1207 21:37:43.356357   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 23/60
	I1207 21:37:44.357670   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 24/60
	I1207 21:37:45.359595   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 25/60
	I1207 21:37:46.360920   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 26/60
	I1207 21:37:47.362034   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 27/60
	I1207 21:37:48.364301   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 28/60
	I1207 21:37:49.365887   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 29/60
	I1207 21:37:50.367925   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 30/60
	I1207 21:37:51.369669   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 31/60
	I1207 21:37:52.371703   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 32/60
	I1207 21:37:53.373840   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 33/60
	I1207 21:37:54.375307   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 34/60
	I1207 21:37:55.377155   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 35/60
	I1207 21:37:56.379272   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 36/60
	I1207 21:37:57.381546   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 37/60
	I1207 21:37:58.383868   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 38/60
	I1207 21:37:59.385270   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 39/60
	I1207 21:38:00.387840   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 40/60
	I1207 21:38:01.389170   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 41/60
	I1207 21:38:02.390679   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 42/60
	I1207 21:38:03.392459   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 43/60
	I1207 21:38:04.394034   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 44/60
	I1207 21:38:05.396065   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 45/60
	I1207 21:38:06.397411   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 46/60
	I1207 21:38:07.398940   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 47/60
	I1207 21:38:08.400492   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 48/60
	I1207 21:38:09.402205   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 49/60
	I1207 21:38:10.404283   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 50/60
	I1207 21:38:11.405567   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 51/60
	I1207 21:38:12.407155   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 52/60
	I1207 21:38:13.408860   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 53/60
	I1207 21:38:14.410350   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 54/60
	I1207 21:38:15.412549   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 55/60
	I1207 21:38:16.414143   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 56/60
	I1207 21:38:17.415475   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 57/60
	I1207 21:38:18.417720   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 58/60
	I1207 21:38:19.419425   56978 main.go:141] libmachine: (newest-cni-155321) Waiting for machine to stop 59/60
	I1207 21:38:20.420705   56978 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1207 21:38:20.420746   56978 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1207 21:38:20.423002   56978 out.go:177] 
	W1207 21:38:20.424589   56978 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1207 21:38:20.424608   56978 out.go:239] * 
	* 
	W1207 21:38:20.426999   56978 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 21:38:20.429349   56978 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p newest-cni-155321 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155321 -n newest-cni-155321
E1207 21:38:21.112794   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155321 -n newest-cni-155321: exit status 3 (18.438275127s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:38:38.866304   59370 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.117:22: connect: no route to host
	E1207 21:38:38.866330   59370 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.117:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-155321" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (139.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155321 -n newest-cni-155321
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155321 -n newest-cni-155321: exit status 3 (3.231410749s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:38:42.098222   59876 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.117:22: connect: no route to host
	E1207 21:38:42.098244   59876 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.117:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-155321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-155321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153069048s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.117:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-155321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155321 -n newest-cni-155321
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155321 -n newest-cni-155321: exit status 3 (3.060271115s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 21:38:51.314311   60380 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.117:22: connect: no route to host
	E1207 21:38:51.314336   60380 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.117:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-155321" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.45s)

                                                
                                    

Test pass (232/299)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 48.71
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 19.97
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.1/json-events 43.41
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.14
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
26 TestBinaryMirror 0.59
27 TestOffline 121.27
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 212.45
34 TestAddons/parallel/Registry 27.89
36 TestAddons/parallel/InspektorGadget 11.39
37 TestAddons/parallel/MetricsServer 6.12
38 TestAddons/parallel/HelmTiller 18.68
40 TestAddons/parallel/CSI 70.58
41 TestAddons/parallel/Headlamp 14.68
42 TestAddons/parallel/CloudSpanner 5.62
43 TestAddons/parallel/LocalPath 70.23
44 TestAddons/parallel/NvidiaDevicePlugin 5.65
47 TestAddons/serial/GCPAuth/Namespaces 0.12
49 TestCertOptions 75.25
50 TestCertExpiration 364.26
52 TestForceSystemdFlag 109.37
53 TestForceSystemdEnv 51.52
55 TestKVMDriverInstallOrUpdate 2.11
59 TestErrorSpam/setup 45.74
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.78
62 TestErrorSpam/pause 1.54
63 TestErrorSpam/unpause 1.66
64 TestErrorSpam/stop 2.25
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 61.68
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 34.91
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.22
76 TestFunctional/serial/CacheCmd/cache/add_local 2.21
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 36.7
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.6
87 TestFunctional/serial/LogsFileCmd 1.52
88 TestFunctional/serial/InvalidService 4.72
90 TestFunctional/parallel/ConfigCmd 0.44
91 TestFunctional/parallel/DashboardCmd 21.12
92 TestFunctional/parallel/DryRun 0.28
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 1.01
98 TestFunctional/parallel/ServiceCmdConnect 10.07
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 59.04
102 TestFunctional/parallel/SSHCmd 0.45
103 TestFunctional/parallel/CpCmd 1.02
104 TestFunctional/parallel/MySQL 30.34
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.56
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
114 TestFunctional/parallel/License 0.64
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
122 TestFunctional/parallel/ImageCommands/ImageBuild 4.91
123 TestFunctional/parallel/ImageCommands/Setup 2.11
124 TestFunctional/parallel/ServiceCmd/DeployApp 30.45
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.1
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 10.47
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.18
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.08
138 TestFunctional/parallel/ImageCommands/ImageRemove 1.02
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.96
140 TestFunctional/parallel/ServiceCmd/List 0.56
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
143 TestFunctional/parallel/ServiceCmd/Format 0.43
144 TestFunctional/parallel/ServiceCmd/URL 0.37
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.98
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
147 TestFunctional/parallel/ProfileCmd/profile_list 0.37
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
149 TestFunctional/parallel/MountCmd/any-port 8.6
150 TestFunctional/parallel/Version/short 0.06
151 TestFunctional/parallel/Version/components 0.52
152 TestFunctional/parallel/MountCmd/specific-port 1.7
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
154 TestFunctional/delete_addon-resizer_images 0.07
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestIngressAddonLegacy/StartLegacyK8sCluster 124.22
162 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.5
163 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
167 TestJSONOutput/start/Command 66.31
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.67
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.64
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 7.11
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.22
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 99.25
199 TestMountStart/serial/StartWithMountFirst 31.39
200 TestMountStart/serial/VerifyMountFirst 0.4
201 TestMountStart/serial/StartWithMountSecond 27.91
202 TestMountStart/serial/VerifyMountSecond 0.4
203 TestMountStart/serial/DeleteFirst 0.67
204 TestMountStart/serial/VerifyMountPostDelete 0.41
205 TestMountStart/serial/Stop 1.17
206 TestMountStart/serial/RestartStopped 24.87
207 TestMountStart/serial/VerifyMountPostStop 0.4
210 TestMultiNode/serial/FreshStart2Nodes 122.08
211 TestMultiNode/serial/DeployApp2Nodes 5.6
213 TestMultiNode/serial/AddNode 43.99
214 TestMultiNode/serial/MultiNodeLabels 0.06
215 TestMultiNode/serial/ProfileList 0.21
216 TestMultiNode/serial/CopyFile 7.57
217 TestMultiNode/serial/StopNode 2.25
218 TestMultiNode/serial/StartAfterStop 33.75
220 TestMultiNode/serial/DeleteNode 1.56
222 TestMultiNode/serial/RestartMultiNode 447.21
223 TestMultiNode/serial/ValidateNameConflict 49.89
230 TestScheduledStopUnix 118.91
236 TestKubernetesUpgrade 242.16
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
240 TestNoKubernetes/serial/StartWithK8s 134.37
241 TestNoKubernetes/serial/StartWithStopK8s 38.94
242 TestStoppedBinaryUpgrade/Setup 1.92
244 TestNoKubernetes/serial/Start 27.97
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
246 TestNoKubernetes/serial/ProfileList 1.03
247 TestNoKubernetes/serial/Stop 1.24
248 TestNoKubernetes/serial/StartNoArgs 27.2
249 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
258 TestPause/serial/Start 103.03
266 TestNetworkPlugins/group/false 3.6
271 TestStartStop/group/old-k8s-version/serial/FirstStart 166.7
273 TestStartStop/group/no-preload/serial/FirstStart 232.14
275 TestStoppedBinaryUpgrade/MinikubeLogs 0.4
277 TestStartStop/group/embed-certs/serial/FirstStart 104.32
278 TestStartStop/group/old-k8s-version/serial/DeployApp 9.45
279 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1
282 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 101.38
283 TestStartStop/group/embed-certs/serial/DeployApp 12.51
284 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.92
286 TestStartStop/group/no-preload/serial/DeployApp 11.89
287 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.43
288 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
290 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
293 TestStartStop/group/old-k8s-version/serial/SecondStart 803.14
295 TestStartStop/group/embed-certs/serial/SecondStart 565.45
298 TestStartStop/group/no-preload/serial/SecondStart 531.73
299 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 510.96
309 TestStartStop/group/newest-cni/serial/FirstStart 59.77
310 TestStartStop/group/newest-cni/serial/DeployApp 0
311 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.78
312 TestNetworkPlugins/group/auto/Start 66.74
314 TestNetworkPlugins/group/kindnet/Start 72.37
315 TestNetworkPlugins/group/auto/KubeletFlags 0.21
316 TestNetworkPlugins/group/auto/NetCatPod 12.38
317 TestNetworkPlugins/group/auto/DNS 0.21
318 TestNetworkPlugins/group/auto/Localhost 0.16
319 TestNetworkPlugins/group/auto/HairPin 0.2
320 TestNetworkPlugins/group/calico/Start 96.76
321 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
322 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
323 TestNetworkPlugins/group/kindnet/NetCatPod 13.45
324 TestNetworkPlugins/group/custom-flannel/Start 92.54
325 TestNetworkPlugins/group/kindnet/DNS 0.2
326 TestNetworkPlugins/group/kindnet/Localhost 0.18
328 TestNetworkPlugins/group/kindnet/HairPin 0.17
329 TestStartStop/group/newest-cni/serial/SecondStart 423.03
330 TestNetworkPlugins/group/enable-default-cni/Start 393.1
331 TestNetworkPlugins/group/calico/ControllerPod 5.03
332 TestNetworkPlugins/group/calico/KubeletFlags 0.22
333 TestNetworkPlugins/group/calico/NetCatPod 11.4
334 TestNetworkPlugins/group/calico/DNS 0.17
335 TestNetworkPlugins/group/calico/Localhost 0.15
336 TestNetworkPlugins/group/calico/HairPin 0.16
337 TestNetworkPlugins/group/flannel/Start 337.33
338 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
339 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.36
340 TestNetworkPlugins/group/custom-flannel/DNS 0.17
341 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
342 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
343 TestNetworkPlugins/group/bridge/Start 305.81
344 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
345 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.43
346 TestNetworkPlugins/group/flannel/ControllerPod 5.03
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
348 TestNetworkPlugins/group/bridge/NetCatPod 12.46
349 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
350 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
351 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
352 TestNetworkPlugins/group/flannel/KubeletFlags 0.51
353 TestNetworkPlugins/group/flannel/NetCatPod 12.41
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
357 TestNetworkPlugins/group/bridge/DNS 16.04
358 TestStartStop/group/newest-cni/serial/Pause 3.05
359 TestNetworkPlugins/group/flannel/DNS 0.21
360 TestNetworkPlugins/group/flannel/Localhost 0.17
361 TestNetworkPlugins/group/flannel/HairPin 0.16
362 TestNetworkPlugins/group/bridge/Localhost 0.24
363 TestNetworkPlugins/group/bridge/HairPin 0.18
x
+
TestDownloadOnly/v1.16.0/json-events (48.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-619271 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-619271 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (48.710205108s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (48.71s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-619271
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-619271: exit status 85 (74.073008ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-619271 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |          |
	|         | -p download-only-619271        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:01:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:01:15.932632   16852 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:01:15.932748   16852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:01:15.932756   16852 out.go:309] Setting ErrFile to fd 2...
	I1207 20:01:15.932761   16852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:01:15.932952   16852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	W1207 20:01:15.933053   16852 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17719-9628/.minikube/config/config.json: open /home/jenkins/minikube-integration/17719-9628/.minikube/config/config.json: no such file or directory
	I1207 20:01:15.933629   16852 out.go:303] Setting JSON to true
	I1207 20:01:15.934480   16852 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2622,"bootTime":1701976654,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 20:01:15.934536   16852 start.go:138] virtualization: kvm guest
	I1207 20:01:15.936908   16852 out.go:97] [download-only-619271] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 20:01:15.938521   16852 out.go:169] MINIKUBE_LOCATION=17719
	I1207 20:01:15.937008   16852 notify.go:220] Checking for updates...
	W1207 20:01:15.937042   16852 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball: no such file or directory
	I1207 20:01:15.941497   16852 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:01:15.942964   16852 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:01:15.944452   16852 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:01:15.945917   16852 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 20:01:15.948677   16852 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 20:01:15.948874   16852 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:01:16.052722   16852 out.go:97] Using the kvm2 driver based on user configuration
	I1207 20:01:16.052755   16852 start.go:298] selected driver: kvm2
	I1207 20:01:16.052765   16852 start.go:902] validating driver "kvm2" against <nil>
	I1207 20:01:16.053219   16852 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:01:16.053372   16852 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 20:01:16.067497   16852 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 20:01:16.067595   16852 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 20:01:16.068280   16852 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1207 20:01:16.068481   16852 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 20:01:16.068541   16852 cni.go:84] Creating CNI manager for ""
	I1207 20:01:16.068562   16852 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:01:16.068574   16852 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 20:01:16.068584   16852 start_flags.go:323] config:
	{Name:download-only-619271 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-619271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:01:16.068848   16852 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:01:16.070993   16852 out.go:97] Downloading VM boot image ...
	I1207 20:01:16.071047   16852 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/iso/amd64/minikube-v1.32.1-1701788780-17711-amd64.iso
	I1207 20:01:25.776163   16852 out.go:97] Starting control plane node download-only-619271 in cluster download-only-619271
	I1207 20:01:25.776191   16852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1207 20:01:25.887359   16852 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1207 20:01:25.887383   16852 cache.go:56] Caching tarball of preloaded images
	I1207 20:01:25.887580   16852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1207 20:01:25.889883   16852 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1207 20:01:25.889915   16852 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:01:26.004222   16852 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1207 20:01:38.076330   16852 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:01:38.076425   16852 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:01:38.976986   16852 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1207 20:01:38.977326   16852 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/download-only-619271/config.json ...
	I1207 20:01:38.977354   16852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/download-only-619271/config.json: {Name:mkabe2cb861661b11c962720e5e6a9c2f66ac9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:01:38.977508   16852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1207 20:01:38.977668   16852 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-619271"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (19.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-619271 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-619271 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (19.971117653s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (19.97s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-619271
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-619271: exit status 85 (68.127098ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-619271 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |          |
	|         | -p download-only-619271        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-619271 | jenkins | v1.32.0 | 07 Dec 23 20:02 UTC |          |
	|         | -p download-only-619271        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:02:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:02:04.719818   16999 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:02:04.719961   16999 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:02:04.719972   16999 out.go:309] Setting ErrFile to fd 2...
	I1207 20:02:04.719979   16999 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:02:04.720188   16999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	W1207 20:02:04.720307   16999 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17719-9628/.minikube/config/config.json: open /home/jenkins/minikube-integration/17719-9628/.minikube/config/config.json: no such file or directory
	I1207 20:02:04.720766   16999 out.go:303] Setting JSON to true
	I1207 20:02:04.721562   16999 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2671,"bootTime":1701976654,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 20:02:04.721622   16999 start.go:138] virtualization: kvm guest
	I1207 20:02:04.724072   16999 out.go:97] [download-only-619271] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 20:02:04.726030   16999 out.go:169] MINIKUBE_LOCATION=17719
	I1207 20:02:04.724278   16999 notify.go:220] Checking for updates...
	I1207 20:02:04.729865   16999 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:02:04.731666   16999 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:02:04.733155   16999 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:02:04.734518   16999 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 20:02:04.737788   16999 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 20:02:04.738436   16999 config.go:182] Loaded profile config "download-only-619271": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1207 20:02:04.738503   16999 start.go:810] api.Load failed for download-only-619271: filestore "download-only-619271": Docker machine "download-only-619271" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 20:02:04.738616   16999 driver.go:392] Setting default libvirt URI to qemu:///system
	W1207 20:02:04.738661   16999 start.go:810] api.Load failed for download-only-619271: filestore "download-only-619271": Docker machine "download-only-619271" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 20:02:04.770243   16999 out.go:97] Using the kvm2 driver based on existing profile
	I1207 20:02:04.770278   16999 start.go:298] selected driver: kvm2
	I1207 20:02:04.770283   16999 start.go:902] validating driver "kvm2" against &{Name:download-only-619271 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-619271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:02:04.770640   16999 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:02:04.770699   16999 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 20:02:04.784670   16999 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 20:02:04.785383   16999 cni.go:84] Creating CNI manager for ""
	I1207 20:02:04.785399   16999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:02:04.785412   16999 start_flags.go:323] config:
	{Name:download-only-619271 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-619271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:02:04.785541   16999 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:02:04.787496   16999 out.go:97] Starting control plane node download-only-619271 in cluster download-only-619271
	I1207 20:02:04.787510   16999 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:02:05.291991   16999 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1207 20:02:05.292023   16999 cache.go:56] Caching tarball of preloaded images
	I1207 20:02:05.292194   16999 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1207 20:02:05.294334   16999 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1207 20:02:05.294358   16999 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:02:05.408282   16999 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-619271"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/json-events (43.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-619271 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-619271 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (43.405665904s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (43.41s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-619271
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-619271: exit status 85 (71.973838ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-619271 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |          |
	|         | -p download-only-619271           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-619271 | jenkins | v1.32.0 | 07 Dec 23 20:02 UTC |          |
	|         | -p download-only-619271           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-619271 | jenkins | v1.32.0 | 07 Dec 23 20:02 UTC |          |
	|         | -p download-only-619271           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:02:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:02:24.759981   17077 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:02:24.760084   17077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:02:24.760093   17077 out.go:309] Setting ErrFile to fd 2...
	I1207 20:02:24.760101   17077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:02:24.760254   17077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	W1207 20:02:24.760361   17077 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17719-9628/.minikube/config/config.json: open /home/jenkins/minikube-integration/17719-9628/.minikube/config/config.json: no such file or directory
	I1207 20:02:24.760767   17077 out.go:303] Setting JSON to true
	I1207 20:02:24.761513   17077 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2691,"bootTime":1701976654,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 20:02:24.761570   17077 start.go:138] virtualization: kvm guest
	I1207 20:02:24.763873   17077 out.go:97] [download-only-619271] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 20:02:24.765460   17077 out.go:169] MINIKUBE_LOCATION=17719
	I1207 20:02:24.764059   17077 notify.go:220] Checking for updates...
	I1207 20:02:24.768210   17077 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:02:24.769566   17077 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:02:24.770945   17077 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:02:24.772367   17077 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 20:02:24.774741   17077 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 20:02:24.775193   17077 config.go:182] Loaded profile config "download-only-619271": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1207 20:02:24.775244   17077 start.go:810] api.Load failed for download-only-619271: filestore "download-only-619271": Docker machine "download-only-619271" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 20:02:24.775321   17077 driver.go:392] Setting default libvirt URI to qemu:///system
	W1207 20:02:24.775363   17077 start.go:810] api.Load failed for download-only-619271: filestore "download-only-619271": Docker machine "download-only-619271" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 20:02:24.804708   17077 out.go:97] Using the kvm2 driver based on existing profile
	I1207 20:02:24.804730   17077 start.go:298] selected driver: kvm2
	I1207 20:02:24.804734   17077 start.go:902] validating driver "kvm2" against &{Name:download-only-619271 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-619271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:02:24.805167   17077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:02:24.805248   17077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17719-9628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1207 20:02:24.818143   17077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1207 20:02:24.818848   17077 cni.go:84] Creating CNI manager for ""
	I1207 20:02:24.818865   17077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1207 20:02:24.818879   17077 start_flags.go:323] config:
	{Name:download-only-619271 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-619271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:02:24.819022   17077 iso.go:125] acquiring lock: {Name:mkbde25ef77d027ed8e13798ae1850647f73fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:02:24.820662   17077 out.go:97] Starting control plane node download-only-619271 in cluster download-only-619271
	I1207 20:02:24.820675   17077 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 20:02:25.345029   17077 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1207 20:02:25.345069   17077 cache.go:56] Caching tarball of preloaded images
	I1207 20:02:25.345219   17077 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 20:02:25.347155   17077 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1207 20:02:25.347171   17077 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:02:25.465340   17077 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:26a42be529125e55182ed93a618b213b -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1207 20:02:39.611682   17077 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:02:39.611765   17077 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17719-9628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1207 20:02:40.435772   17077 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1207 20:02:40.435905   17077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/download-only-619271/config.json ...
	I1207 20:02:40.436130   17077 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1207 20:02:40.436347   17077 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17719-9628/.minikube/cache/linux/amd64/v1.29.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-619271"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-619271
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-542325 --alsologtostderr --binary-mirror http://127.0.0.1:39835 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-542325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-542325
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (121.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-767664 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-767664 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m0.237805027s)
helpers_test.go:175: Cleaning up "offline-crio-767664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-767664
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-767664: (1.034054991s)
--- PASS: TestOffline (121.27s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-757601
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-757601: exit status 85 (62.132696ms)

                                                
                                                
-- stdout --
	* Profile "addons-757601" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-757601"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-757601
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-757601: exit status 85 (63.193856ms)

                                                
                                                
-- stdout --
	* Profile "addons-757601" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-757601"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (212.45s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-757601 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-757601 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.452951027s)
--- PASS: TestAddons/Setup (212.45s)

                                                
                                    
TestAddons/parallel/Registry (27.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 48.751636ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-s82w5" [e0e2ee17-ea9d-4ffa-b6db-8b3ed128a0a1] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016770493s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k2cft" [4fb405a9-9156-489a-82b8-dc52261e2365] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017466706s
addons_test.go:339: (dbg) Run:  kubectl --context addons-757601 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-757601 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-757601 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (16.955988474s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 ip
2023/12/07 20:07:08 [DEBUG] GET http://192.168.39.93:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (27.89s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.39s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jrfzg" [cd55bd61-6e54-4ea5-a5cc-0e0ae781c1c8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.015581025s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-757601
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-757601: (6.371718485s)
--- PASS: TestAddons/parallel/InspektorGadget (11.39s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.12s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 48.769658ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-8m6ck" [d9124562-9981-4a1d-9b4c-3e26b6ebe070] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.02373203s
addons_test.go:414: (dbg) Run:  kubectl --context addons-757601 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.12s)

                                                
                                    
TestAddons/parallel/HelmTiller (18.68s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 48.583984ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-9q4z5" [ab057003-3d2f-4282-a60e-3ed01033c5e4] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.019788188s
addons_test.go:472: (dbg) Run:  kubectl --context addons-757601 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-757601 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.69943095s)
addons_test.go:477: kubectl --context addons-757601 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:472: (dbg) Run:  kubectl --context addons-757601 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-757601 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.270311667s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p addons-757601 addons disable helm-tiller --alsologtostderr -v=1: (1.371811889s)
--- PASS: TestAddons/parallel/HelmTiller (18.68s)

                                                
                                    
TestAddons/parallel/CSI (70.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 49.426832ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-757601 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-757601 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4440b9a1-a229-447d-83aa-bb0289bdcd4d] Pending
helpers_test.go:344: "task-pv-pod" [4440b9a1-a229-447d-83aa-bb0289bdcd4d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4440b9a1-a229-447d-83aa-bb0289bdcd4d] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.019992954s
addons_test.go:583: (dbg) Run:  kubectl --context addons-757601 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-757601 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-757601 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-757601 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-757601 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-757601 delete pod task-pv-pod: (1.151832001s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-757601 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-757601 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-757601 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6f923ee7-6164-4d7d-8082-a560a56bc8ad] Pending
helpers_test.go:344: "task-pv-pod-restore" [6f923ee7-6164-4d7d-8082-a560a56bc8ad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6f923ee7-6164-4d7d-8082-a560a56bc8ad] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.016294615s
addons_test.go:625: (dbg) Run:  kubectl --context addons-757601 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-757601 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-757601 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-757601 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.897353353s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (70.58s)

                                                
                                    
TestAddons/parallel/Headlamp (14.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-757601 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-757601 --alsologtostderr -v=1: (1.651406046s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-wtvt2" [59547b02-2a38-42bc-8e3a-336953be35f5] Pending
helpers_test.go:344: "headlamp-777fd4b855-wtvt2" [59547b02-2a38-42bc-8e3a-336953be35f5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-wtvt2" [59547b02-2a38-42bc-8e3a-336953be35f5] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-wtvt2" [59547b02-2a38-42bc-8e3a-336953be35f5] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.028369413s
--- PASS: TestAddons/parallel/Headlamp (14.68s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-7g47b" [79f00e48-54d4-4af7-8149-d06c561e8bd7] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010419301s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-757601
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                    
TestAddons/parallel/LocalPath (70.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-757601 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-757601 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [33d2d773-ba40-4b4b-92c1-f0be56308b19] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [33d2d773-ba40-4b4b-92c1-f0be56308b19] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [33d2d773-ba40-4b4b-92c1-f0be56308b19] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 15.017162719s
addons_test.go:890: (dbg) Run:  kubectl --context addons-757601 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 ssh "cat /opt/local-path-provisioner/pvc-109e20d2-16b7-43c6-9128-df817164d27d_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-757601 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-757601 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-757601 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-757601 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.71078834s)
--- PASS: TestAddons/parallel/LocalPath (70.23s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5m6r5" [d2c991f1-e7f9-47bd-b82e-f542c0dd79cd] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.016164589s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-757601
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-757601 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-757601 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (75.25s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-620116 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-620116 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m13.975876874s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-620116 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-620116 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-620116 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-620116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-620116
--- PASS: TestCertOptions (75.25s)

                                                
                                    
TestCertExpiration (364.26s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-814417 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-814417 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m16.889342231s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-814417 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-814417 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m45.770162408s)
helpers_test.go:175: Cleaning up "cert-expiration-814417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-814417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-814417: (1.602953695s)
--- PASS: TestCertExpiration (364.26s)

                                                
                                    
TestForceSystemdFlag (109.37s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-829616 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-829616 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m48.068910779s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-829616 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-829616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-829616
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-829616: (1.066852233s)
--- PASS: TestForceSystemdFlag (109.37s)

                                                
                                    
TestForceSystemdEnv (51.52s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-832774 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-832774 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.508493315s)
helpers_test.go:175: Cleaning up "force-systemd-env-832774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-832774
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-832774: (1.007214998s)
--- PASS: TestForceSystemdEnv (51.52s)

                                                
                                    
TestKVMDriverInstallOrUpdate (2.11s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.11s)

                                                
                                    
TestErrorSpam/setup (45.74s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-292793 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-292793 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-292793 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-292793 --driver=kvm2  --container-runtime=crio: (45.743097271s)
--- PASS: TestErrorSpam/setup (45.74s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

                                                
                                    
TestErrorSpam/stop (2.25s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 stop: (2.091469307s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-292793 --log_dir /tmp/nospam-292793 stop
--- PASS: TestErrorSpam/stop (2.25s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17719-9628/.minikube/files/etc/test/nested/copy/16840/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (61.68s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785124 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-785124 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m1.679046582s)
--- PASS: TestFunctional/serial/StartWithProxy (61.68s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.91s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785124 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-785124 --alsologtostderr -v=8: (34.908652391s)
functional_test.go:659: soft start took 34.909291672s for "functional-785124" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.91s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-785124 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 cache add registry.k8s.io/pause:3.1: (1.085671484s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 cache add registry.k8s.io/pause:3.3: (1.128818745s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 cache add registry.k8s.io/pause:latest: (1.006083771s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-785124 /tmp/TestFunctionalserialCacheCmdcacheadd_local2555894502/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 cache add minikube-local-cache-test:functional-785124
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 cache add minikube-local-cache-test:functional-785124: (1.877632443s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 cache delete minikube-local-cache-test:functional-785124
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-785124
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785124 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (231.176874ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
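For reference, the cache-reload flow exercised above can be reproduced by hand. This is a minimal sketch using the same commands the test runs; the profile name functional-785124 is simply the profile created by this job, and any existing profile works.

  # Pre-load an image into minikube's cache, then remove it from the node's runtime.
  minikube -p functional-785124 cache add registry.k8s.io/pause:latest
  minikube -p functional-785124 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
  # inspecti now fails because the image is gone from the runtime.
  minikube -p functional-785124 ssh "sudo crictl inspecti registry.k8s.io/pause:latest" || true
  # Push everything in the local cache back onto the node; inspecti succeeds again.
  minikube -p functional-785124 cache reload
  minikube -p functional-785124 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"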

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 kubectl -- --context functional-785124 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-785124 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.7s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785124 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-785124 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.700002784s)
functional_test.go:757: restart took 36.700127449s for "functional-785124" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.70s)
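A sketch of the restart performed above, assuming an existing functional-785124 profile; --extra-config keys take the form component.flag=value and are passed straight through to the named component.

  # Restart the existing cluster, handing an admission-plugin flag to the apiserver.
  minikube start -p functional-785124 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all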

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-785124 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.6s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 logs: (1.598693405s)
--- PASS: TestFunctional/serial/LogsCmd (1.60s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 logs --file /tmp/TestFunctionalserialLogsFileCmd3548078839/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 logs --file /tmp/TestFunctionalserialLogsFileCmd3548078839/001/logs.txt: (1.516944184s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (4.72s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-785124 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-785124
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-785124: exit status 115 (298.159132ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.231:31263 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-785124 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-785124 delete -f testdata/invalidsvc.yaml: (1.132204837s)
--- PASS: TestFunctional/serial/InvalidService (4.72s)
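The failure mode being checked is a service whose selector matches no running pod; minikube should refuse to open it and exit 115 (SVC_UNREACHABLE). A rough sketch using the repo's testdata manifest:

  kubectl --context functional-785124 apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p functional-785124   # expected to fail
  echo $?                                             # 115 (SVC_UNREACHABLE)
  kubectl --context functional-785124 delete -f testdata/invalidsvc.yaml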

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785124 config get cpus: exit status 14 (69.857558ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785124 config get cpus: exit status 14 (70.895986ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
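The config round-trip above amounts to the following; `config get` on an unset key exits 14, which is what the test treats as success. Sketch only, and cpus is an arbitrary key here.

  minikube -p functional-785124 config unset cpus
  minikube -p functional-785124 config get cpus     # exits 14: key not found
  minikube -p functional-785124 config set cpus 2
  minikube -p functional-785124 config get cpus     # prints 2
  minikube -p functional-785124 config unset cpus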

                                                
                                    
TestFunctional/parallel/DashboardCmd (21.12s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-785124 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-785124 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24615: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.12s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785124 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-785124 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.005115ms)

                                                
                                                
-- stdout --
	* [functional-785124] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 20:16:41.016110   24372 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:16:41.016226   24372 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:16:41.016235   24372 out.go:309] Setting ErrFile to fd 2...
	I1207 20:16:41.016240   24372 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:16:41.016445   24372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 20:16:41.017018   24372 out.go:303] Setting JSON to false
	I1207 20:16:41.018026   24372 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3547,"bootTime":1701976654,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 20:16:41.018084   24372 start.go:138] virtualization: kvm guest
	I1207 20:16:41.020730   24372 out.go:177] * [functional-785124] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 20:16:41.022629   24372 notify.go:220] Checking for updates...
	I1207 20:16:41.024383   24372 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:16:41.026437   24372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:16:41.028004   24372 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:16:41.029512   24372 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:16:41.030989   24372 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 20:16:41.032369   24372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:16:41.034028   24372 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:16:41.034434   24372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:16:41.034477   24372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:16:41.048725   24372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34287
	I1207 20:16:41.049118   24372 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:16:41.049771   24372 main.go:141] libmachine: Using API Version  1
	I1207 20:16:41.049799   24372 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:16:41.050173   24372 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:16:41.050336   24372 main.go:141] libmachine: (functional-785124) Calling .DriverName
	I1207 20:16:41.050591   24372 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:16:41.050981   24372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:16:41.051025   24372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:16:41.064700   24372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38583
	I1207 20:16:41.065024   24372 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:16:41.065465   24372 main.go:141] libmachine: Using API Version  1
	I1207 20:16:41.065489   24372 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:16:41.065777   24372 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:16:41.065984   24372 main.go:141] libmachine: (functional-785124) Calling .DriverName
	I1207 20:16:41.098336   24372 out.go:177] * Using the kvm2 driver based on existing profile
	I1207 20:16:41.099800   24372 start.go:298] selected driver: kvm2
	I1207 20:16:41.099815   24372 start.go:902] validating driver "kvm2" against &{Name:functional-785124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-785124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.231 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:16:41.099934   24372 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:16:41.102129   24372 out.go:177] 
	W1207 20:16:41.103525   24372 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 20:16:41.105136   24372 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785124 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
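Sketch of the two dry-run invocations above: the first requests less memory than minikube's usable minimum and should exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the VM; the second validates the existing profile and should succeed.

  minikube start -p functional-785124 --dry-run --memory 250MB \
    --alsologtostderr --driver=kvm2 --container-runtime=crio   # exit 23 expected
  minikube start -p functional-785124 --dry-run --alsologtostderr -v=1 \
    --driver=kvm2 --container-runtime=crio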

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785124 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-785124 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (142.854096ms)

                                                
                                                
-- stdout --
	* [functional-785124] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 20:16:41.298671   24427 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:16:41.298799   24427 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:16:41.298807   24427 out.go:309] Setting ErrFile to fd 2...
	I1207 20:16:41.298812   24427 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:16:41.299071   24427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 20:16:41.299639   24427 out.go:303] Setting JSON to false
	I1207 20:16:41.300510   24427 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3547,"bootTime":1701976654,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 20:16:41.300561   24427 start.go:138] virtualization: kvm guest
	I1207 20:16:41.302845   24427 out.go:177] * [functional-785124] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1207 20:16:41.304655   24427 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:16:41.304713   24427 notify.go:220] Checking for updates...
	I1207 20:16:41.307184   24427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:16:41.308638   24427 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 20:16:41.310090   24427 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 20:16:41.311347   24427 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 20:16:41.312613   24427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:16:41.314241   24427 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:16:41.314735   24427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:16:41.314778   24427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:16:41.329947   24427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I1207 20:16:41.330273   24427 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:16:41.330838   24427 main.go:141] libmachine: Using API Version  1
	I1207 20:16:41.330864   24427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:16:41.331142   24427 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:16:41.331272   24427 main.go:141] libmachine: (functional-785124) Calling .DriverName
	I1207 20:16:41.331487   24427 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:16:41.331805   24427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:16:41.331840   24427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:16:41.345594   24427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41707
	I1207 20:16:41.345952   24427 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:16:41.346351   24427 main.go:141] libmachine: Using API Version  1
	I1207 20:16:41.346369   24427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:16:41.346660   24427 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:16:41.346829   24427 main.go:141] libmachine: (functional-785124) Calling .DriverName
	I1207 20:16:41.378736   24427 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1207 20:16:41.380088   24427 start.go:298] selected driver: kvm2
	I1207 20:16:41.380100   24427 start.go:902] validating driver "kvm2" against &{Name:functional-785124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-785124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.231 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:16:41.380227   24427 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:16:41.382315   24427 out.go:177] 
	W1207 20:16:41.383635   24427 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1207 20:16:41.384992   24427 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.07s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-785124 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-785124 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-cg68b" [3a49683a-2752-4b9a-a2bc-9b65e780fa22] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-cg68b" [3a49683a-2752-4b9a-a2bc-9b65e780fa22] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.012441851s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.231:32208
functional_test.go:1674: http://192.168.50.231:32208: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-cg68b

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.231:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.231:32208
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.07s)
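Equivalent manual steps, sketched; the echoserver image and deployment name come from the run above, and the NodePort assigned by the cluster will differ between runs.

  kubectl --context functional-785124 create deployment hello-node-connect \
    --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-785124 expose deployment hello-node-connect \
    --type=NodePort --port=8080
  # Wait for the pod, then resolve the NodePort URL and probe it.
  kubectl --context functional-785124 wait --for=condition=available \
    deployment/hello-node-connect --timeout=120s
  URL=$(minikube -p functional-785124 service hello-node-connect --url)
  curl -s "$URL"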

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (59.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0d937af0-212d-4674-b0aa-c1699f011232] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.016469853s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-785124 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-785124 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-785124 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-785124 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3cbda00c-8f75-4c16-9e37-f474aaa9811a] Pending
helpers_test.go:344: "sp-pod" [3cbda00c-8f75-4c16-9e37-f474aaa9811a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3cbda00c-8f75-4c16-9e37-f474aaa9811a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 32.052140869s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-785124 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-785124 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-785124 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [684d0683-c213-48a4-9ad3-298b733f3f09] Pending
helpers_test.go:344: "sp-pod" [684d0683-c213-48a4-9ad3-298b733f3f09] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [684d0683-c213-48a4-9ad3-298b733f3f09] Running
E1207 20:17:02.182534   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
2023/12/07 20:17:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.031199324s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-785124 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (59.04s)
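The persistence check boils down to writing through the claim, deleting the pod, and reading the file back from a fresh pod bound to the same PVC. A sketch using the testdata manifests referenced above:

  kubectl --context functional-785124 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-785124 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-785124 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-785124 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-785124 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-785124 exec sp-pod -- ls /tmp/mount   # foo should survive the pod restart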

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh -n functional-785124 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 cp functional-785124:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2769812051/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh -n functional-785124 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.02s)

                                                
                                    
TestFunctional/parallel/MySQL (30.34s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-785124 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-mkmkh" [0a760bcc-be49-42ea-ade4-c0a2d91a05ae] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-mkmkh" [0a760bcc-be49-42ea-ade4-c0a2d91a05ae] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.045092323s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-785124 exec mysql-859648c796-mkmkh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-785124 exec mysql-859648c796-mkmkh -- mysql -ppassword -e "show databases;": exit status 1 (259.82997ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-785124 exec mysql-859648c796-mkmkh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-785124 exec mysql-859648c796-mkmkh -- mysql -ppassword -e "show databases;": exit status 1 (204.060449ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-785124 exec mysql-859648c796-mkmkh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-785124 exec mysql-859648c796-mkmkh -- mysql -ppassword -e "show databases;": exit status 1 (437.364686ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-785124 exec mysql-859648c796-mkmkh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.34s)
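The repeated non-zero exits above are expected: the pod reports Running before mysqld is accepting connections, so the test keeps retrying until the query succeeds. A rough manual equivalent (the pod name is looked up by label, since it changes every run):

  POD=$(kubectl --context functional-785124 get pods -l app=mysql \
    -o jsonpath='{.items[0].metadata.name}')
  # Retry until mysqld answers on its socket.
  until kubectl --context functional-785124 exec "$POD" -- mysql -ppassword -e "show databases;"; do
    sleep 2
  done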

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16840/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo cat /etc/test/nested/copy/16840/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.56s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16840.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo cat /etc/ssl/certs/16840.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16840.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo cat /usr/share/ca-certificates/16840.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/168402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo cat /etc/ssl/certs/168402.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/168402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo cat /usr/share/ca-certificates/168402.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.56s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-785124 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785124 ssh "sudo systemctl is-active docker": exit status 1 (244.691016ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785124 ssh "sudo systemctl is-active containerd": exit status 1 (246.71835ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
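What is being asserted: with crio as the container runtime, the docker and containerd units must report inactive. systemctl is-active exits non-zero for an inactive unit, which surfaces as the exit status 1 seen above. Sketch, assuming the same crio-based profile:

  minikube -p functional-785124 ssh "sudo systemctl is-active docker"       # prints "inactive", non-zero exit
  minikube -p functional-785124 ssh "sudo systemctl is-active containerd"   # prints "inactive", non-zero exit
  minikube -p functional-785124 ssh "sudo systemctl is-active crio"         # expected "active" for this profile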

                                                
                                    
TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-785124 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-785124
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-785124
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-785124 image ls --format short --alsologtostderr:
I1207 20:16:46.581712   24776 out.go:296] Setting OutFile to fd 1 ...
I1207 20:16:46.581832   24776 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:16:46.581841   24776 out.go:309] Setting ErrFile to fd 2...
I1207 20:16:46.581846   24776 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:16:46.582071   24776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
I1207 20:16:46.582635   24776 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1207 20:16:46.582731   24776 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1207 20:16:46.583099   24776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1207 20:16:46.583147   24776 main.go:141] libmachine: Launching plugin server for driver kvm2
I1207 20:16:46.597370   24776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
I1207 20:16:46.597867   24776 main.go:141] libmachine: () Calling .GetVersion
I1207 20:16:46.598475   24776 main.go:141] libmachine: Using API Version  1
I1207 20:16:46.598509   24776 main.go:141] libmachine: () Calling .SetConfigRaw
I1207 20:16:46.598839   24776 main.go:141] libmachine: () Calling .GetMachineName
I1207 20:16:46.599018   24776 main.go:141] libmachine: (functional-785124) Calling .GetState
I1207 20:16:46.601124   24776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1207 20:16:46.601193   24776 main.go:141] libmachine: Launching plugin server for driver kvm2
I1207 20:16:46.615307   24776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
I1207 20:16:46.615784   24776 main.go:141] libmachine: () Calling .GetVersion
I1207 20:16:46.616271   24776 main.go:141] libmachine: Using API Version  1
I1207 20:16:46.616297   24776 main.go:141] libmachine: () Calling .SetConfigRaw
I1207 20:16:46.616615   24776 main.go:141] libmachine: () Calling .GetMachineName
I1207 20:16:46.616820   24776 main.go:141] libmachine: (functional-785124) Calling .DriverName
I1207 20:16:46.617043   24776 ssh_runner.go:195] Run: systemctl --version
I1207 20:16:46.617066   24776 main.go:141] libmachine: (functional-785124) Calling .GetSSHHostname
I1207 20:16:46.620182   24776 main.go:141] libmachine: (functional-785124) DBG | domain functional-785124 has defined MAC address 52:54:00:37:f5:bb in network mk-functional-785124
I1207 20:16:46.620564   24776 main.go:141] libmachine: (functional-785124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:f5:bb", ip: ""} in network mk-functional-785124: {Iface:virbr1 ExpiryTime:2023-12-07 21:13:51 +0000 UTC Type:0 Mac:52:54:00:37:f5:bb Iaid: IPaddr:192.168.50.231 Prefix:24 Hostname:functional-785124 Clientid:01:52:54:00:37:f5:bb}
I1207 20:16:46.620592   24776 main.go:141] libmachine: (functional-785124) DBG | domain functional-785124 has defined IP address 192.168.50.231 and MAC address 52:54:00:37:f5:bb in network mk-functional-785124
I1207 20:16:46.620748   24776 main.go:141] libmachine: (functional-785124) Calling .GetSSHPort
I1207 20:16:46.620919   24776 main.go:141] libmachine: (functional-785124) Calling .GetSSHKeyPath
I1207 20:16:46.621110   24776 main.go:141] libmachine: (functional-785124) Calling .GetSSHUsername
I1207 20:16:46.621261   24776 sshutil.go:53] new ssh client: &{IP:192.168.50.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/functional-785124/id_rsa Username:docker}
I1207 20:16:46.754405   24776 ssh_runner.go:195] Run: sudo crictl images --output json
I1207 20:16:46.808357   24776 main.go:141] libmachine: Making call to close driver server
I1207 20:16:46.808369   24776 main.go:141] libmachine: (functional-785124) Calling .Close
I1207 20:16:46.808614   24776 main.go:141] libmachine: Successfully made call to close driver server
I1207 20:16:46.808641   24776 main.go:141] libmachine: Making call to close connection to plugin binary
I1207 20:16:46.808650   24776 main.go:141] libmachine: (functional-785124) DBG | Closing plugin on server side
I1207 20:16:46.808657   24776 main.go:141] libmachine: Making call to close driver server
I1207 20:16:46.808669   24776 main.go:141] libmachine: (functional-785124) Calling .Close
I1207 20:16:46.808955   24776 main.go:141] libmachine: (functional-785124) DBG | Closing plugin on server side
I1207 20:16:46.808961   24776 main.go:141] libmachine: Successfully made call to close driver server
I1207 20:16:46.808996   24776 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-785124 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/google-containers/addon-resizer  | functional-785124  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-785124  | 81ecdc9312479 | 3.35kB |
| localhost/my-image                      | functional-785124  | 941fe53842e4b | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-785124 image ls --format table --alsologtostderr:
I1207 20:16:52.274013   25344 out.go:296] Setting OutFile to fd 1 ...
I1207 20:16:52.274129   25344 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:16:52.274138   25344 out.go:309] Setting ErrFile to fd 2...
I1207 20:16:52.274142   25344 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:16:52.274322   25344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
I1207 20:16:52.274879   25344 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1207 20:16:52.274978   25344 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1207 20:16:52.275330   25344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1207 20:16:52.275370   25344 main.go:141] libmachine: Launching plugin server for driver kvm2
I1207 20:16:52.289529   25344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
I1207 20:16:52.289915   25344 main.go:141] libmachine: () Calling .GetVersion
I1207 20:16:52.290461   25344 main.go:141] libmachine: Using API Version  1
I1207 20:16:52.290484   25344 main.go:141] libmachine: () Calling .SetConfigRaw
I1207 20:16:52.290820   25344 main.go:141] libmachine: () Calling .GetMachineName
I1207 20:16:52.291023   25344 main.go:141] libmachine: (functional-785124) Calling .GetState
I1207 20:16:52.292653   25344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1207 20:16:52.292697   25344 main.go:141] libmachine: Launching plugin server for driver kvm2
I1207 20:16:52.306642   25344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
I1207 20:16:52.306997   25344 main.go:141] libmachine: () Calling .GetVersion
I1207 20:16:52.307459   25344 main.go:141] libmachine: Using API Version  1
I1207 20:16:52.307487   25344 main.go:141] libmachine: () Calling .SetConfigRaw
I1207 20:16:52.307845   25344 main.go:141] libmachine: () Calling .GetMachineName
I1207 20:16:52.308003   25344 main.go:141] libmachine: (functional-785124) Calling .DriverName
I1207 20:16:52.308210   25344 ssh_runner.go:195] Run: systemctl --version
I1207 20:16:52.308229   25344 main.go:141] libmachine: (functional-785124) Calling .GetSSHHostname
I1207 20:16:52.311127   25344 main.go:141] libmachine: (functional-785124) DBG | domain functional-785124 has defined MAC address 52:54:00:37:f5:bb in network mk-functional-785124
I1207 20:16:52.311447   25344 main.go:141] libmachine: (functional-785124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:f5:bb", ip: ""} in network mk-functional-785124: {Iface:virbr1 ExpiryTime:2023-12-07 21:13:51 +0000 UTC Type:0 Mac:52:54:00:37:f5:bb Iaid: IPaddr:192.168.50.231 Prefix:24 Hostname:functional-785124 Clientid:01:52:54:00:37:f5:bb}
I1207 20:16:52.311476   25344 main.go:141] libmachine: (functional-785124) DBG | domain functional-785124 has defined IP address 192.168.50.231 and MAC address 52:54:00:37:f5:bb in network mk-functional-785124
I1207 20:16:52.311629   25344 main.go:141] libmachine: (functional-785124) Calling .GetSSHPort
I1207 20:16:52.311772   25344 main.go:141] libmachine: (functional-785124) Calling .GetSSHKeyPath
I1207 20:16:52.311893   25344 main.go:141] libmachine: (functional-785124) Calling .GetSSHUsername
I1207 20:16:52.312064   25344 sshutil.go:53] new ssh client: &{IP:192.168.50.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/functional-785124/id_rsa Username:docker}
I1207 20:16:52.405488   25344 ssh_runner.go:195] Run: sudo crictl images --output json
I1207 20:16:52.456490   25344 main.go:141] libmachine: Making call to close driver server
I1207 20:16:52.456513   25344 main.go:141] libmachine: (functional-785124) Calling .Close
I1207 20:16:52.457348   25344 main.go:141] libmachine: (functional-785124) DBG | Closing plugin on server side
I1207 20:16:52.457375   25344 main.go:141] libmachine: Successfully made call to close driver server
I1207 20:16:52.457389   25344 main.go:141] libmachine: Making call to close connection to plugin binary
I1207 20:16:52.457409   25344 main.go:141] libmachine: Making call to close driver server
I1207 20:16:52.457421   25344 main.go:141] libmachine: (functional-785124) Calling .Close
I1207 20:16:52.457698   25344 main.go:141] libmachine: Successfully made call to close driver server
I1207 20:16:52.457701   25344 main.go:141] libmachine: (functional-785124) DBG | Closing plugin on server side
I1207 20:16:52.457719   25344 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-785124 image ls --format json --alsologtostderr:
[{"id":"30a2a96acd3a89de62546ff6d510ee668f3f898c9c7ec4e678c89d5818584962","repoDigests":["docker.io/library/665c7b01eefc5f6e8a87afe51bab18cd1da91f3db3cf15f68e0b461b9e6144c3-tmp@sha256:1c5d599517755fce58c37cc120ba8da84b3932b4173221b6c0a6ac2083655783"],"repoTags":[],"size":"1466018"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-785124"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900
bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigest
s":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357
513ecc7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"81ecdc931247932d95082c9dacba46e51633ebb36ad076f92731d69452f52011","repoDigests":["localhost/minikube-local-cache-test@sha256:2fadad8467a413ee4f8c76ba42de25afbf7c2822a02ed3b9cd44da62a571c13e"],"repoTags":["localhost/minikube-local-cache-test:functional-785124"],"size":"3345"},{"id":"941fe53842e4b1bb905e9abe13d903f979c54ed5a00d2049f26446def7222945","repoDigests":["localhost/my-image@sha256:21b4996f620908c9654b2349f21e10c8291c2e218261e8d3e3124c575d350792"],"repoTags":["localhost/my-image:functional-785124"],"size":"1468600"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["re
gistry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c4
9a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec
1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/lib
rary/nginx:latest"],"size":"190960382"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-785124 image ls --format json --alsologtostderr:
I1207 20:16:52.042494   25290 out.go:296] Setting OutFile to fd 1 ...
I1207 20:16:52.042780   25290 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:16:52.042790   25290 out.go:309] Setting ErrFile to fd 2...
I1207 20:16:52.042795   25290 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:16:52.042983   25290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
I1207 20:16:52.043544   25290 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1207 20:16:52.043642   25290 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1207 20:16:52.043991   25290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1207 20:16:52.044038   25290 main.go:141] libmachine: Launching plugin server for driver kvm2
I1207 20:16:52.056866   25290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
I1207 20:16:52.057280   25290 main.go:141] libmachine: () Calling .GetVersion
I1207 20:16:52.058021   25290 main.go:141] libmachine: Using API Version  1
I1207 20:16:52.058047   25290 main.go:141] libmachine: () Calling .SetConfigRaw
I1207 20:16:52.058336   25290 main.go:141] libmachine: () Calling .GetMachineName
I1207 20:16:52.058517   25290 main.go:141] libmachine: (functional-785124) Calling .GetState
I1207 20:16:52.060692   25290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1207 20:16:52.060736   25290 main.go:141] libmachine: Launching plugin server for driver kvm2
I1207 20:16:52.073666   25290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
I1207 20:16:52.074045   25290 main.go:141] libmachine: () Calling .GetVersion
I1207 20:16:52.074496   25290 main.go:141] libmachine: Using API Version  1
I1207 20:16:52.074519   25290 main.go:141] libmachine: () Calling .SetConfigRaw
I1207 20:16:52.074832   25290 main.go:141] libmachine: () Calling .GetMachineName
I1207 20:16:52.074971   25290 main.go:141] libmachine: (functional-785124) Calling .DriverName
I1207 20:16:52.075161   25290 ssh_runner.go:195] Run: systemctl --version
I1207 20:16:52.075207   25290 main.go:141] libmachine: (functional-785124) Calling .GetSSHHostname
I1207 20:16:52.078003   25290 main.go:141] libmachine: (functional-785124) DBG | domain functional-785124 has defined MAC address 52:54:00:37:f5:bb in network mk-functional-785124
I1207 20:16:52.078404   25290 main.go:141] libmachine: (functional-785124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:f5:bb", ip: ""} in network mk-functional-785124: {Iface:virbr1 ExpiryTime:2023-12-07 21:13:51 +0000 UTC Type:0 Mac:52:54:00:37:f5:bb Iaid: IPaddr:192.168.50.231 Prefix:24 Hostname:functional-785124 Clientid:01:52:54:00:37:f5:bb}
I1207 20:16:52.078440   25290 main.go:141] libmachine: (functional-785124) DBG | domain functional-785124 has defined IP address 192.168.50.231 and MAC address 52:54:00:37:f5:bb in network mk-functional-785124
I1207 20:16:52.078576   25290 main.go:141] libmachine: (functional-785124) Calling .GetSSHPort
I1207 20:16:52.078735   25290 main.go:141] libmachine: (functional-785124) Calling .GetSSHKeyPath
I1207 20:16:52.078863   25290 main.go:141] libmachine: (functional-785124) Calling .GetSSHUsername
I1207 20:16:52.078996   25290 sshutil.go:53] new ssh client: &{IP:192.168.50.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/functional-785124/id_rsa Username:docker}
I1207 20:16:52.164774   25290 ssh_runner.go:195] Run: sudo crictl images --output json
I1207 20:16:52.212443   25290 main.go:141] libmachine: Making call to close driver server
I1207 20:16:52.212453   25290 main.go:141] libmachine: (functional-785124) Calling .Close
I1207 20:16:52.212728   25290 main.go:141] libmachine: (functional-785124) DBG | Closing plugin on server side
I1207 20:16:52.212773   25290 main.go:141] libmachine: Successfully made call to close driver server
I1207 20:16:52.212785   25290 main.go:141] libmachine: Making call to close connection to plugin binary
I1207 20:16:52.212799   25290 main.go:141] libmachine: Making call to close driver server
I1207 20:16:52.212811   25290 main.go:141] libmachine: (functional-785124) Calling .Close
I1207 20:16:52.212996   25290 main.go:141] libmachine: Successfully made call to close driver server
I1207 20:16:52.213010   25290 main.go:141] libmachine: Making call to close connection to plugin binary
I1207 20:16:52.213044   25290 main.go:141] libmachine: (functional-785124) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
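For reference, the JSON printed above is a flat array of image records with id, repoDigests, repoTags and size fields. A minimal Go sketch of consuming that output outside the test harness, assuming a `minikube` binary on PATH (the tests invoke out/minikube-linux-amd64 instead) and reusing the profile name from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageRecord mirrors the field names visible in the JSON output above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	// Equivalent of `minikube -p functional-785124 image ls --format json`.
	out, err := exec.Command("minikube", "-p", "functional-785124",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-15.15s %v (%s bytes)\n", img.ID, img.RepoTags, img.Size)
	}
}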

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image ls --format yaml --alsologtostderr
E1207 20:16:46.820852   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-785124 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 81ecdc931247932d95082c9dacba46e51633ebb36ad076f92731d69452f52011
repoDigests:
- localhost/minikube-local-cache-test@sha256:2fadad8467a413ee4f8c76ba42de25afbf7c2822a02ed3b9cd44da62a571c13e
repoTags:
- localhost/minikube-local-cache-test:functional-785124
size: "3345"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-785124
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-785124 image ls --format yaml --alsologtostderr:
I1207 20:16:46.867627   24799 out.go:296] Setting OutFile to fd 1 ...
I1207 20:16:46.867901   24799 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:16:46.867911   24799 out.go:309] Setting ErrFile to fd 2...
I1207 20:16:46.867916   24799 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:16:46.868074   24799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
I1207 20:16:46.868647   24799 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1207 20:16:46.868744   24799 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1207 20:16:46.869105   24799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1207 20:16:46.869155   24799 main.go:141] libmachine: Launching plugin server for driver kvm2
I1207 20:16:46.883763   24799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44931
I1207 20:16:46.884216   24799 main.go:141] libmachine: () Calling .GetVersion
I1207 20:16:46.884788   24799 main.go:141] libmachine: Using API Version  1
I1207 20:16:46.884816   24799 main.go:141] libmachine: () Calling .SetConfigRaw
I1207 20:16:46.885120   24799 main.go:141] libmachine: () Calling .GetMachineName
I1207 20:16:46.885314   24799 main.go:141] libmachine: (functional-785124) Calling .GetState
I1207 20:16:46.887175   24799 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1207 20:16:46.887224   24799 main.go:141] libmachine: Launching plugin server for driver kvm2
I1207 20:16:46.900879   24799 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36357
I1207 20:16:46.901249   24799 main.go:141] libmachine: () Calling .GetVersion
I1207 20:16:46.901723   24799 main.go:141] libmachine: Using API Version  1
I1207 20:16:46.901744   24799 main.go:141] libmachine: () Calling .SetConfigRaw
I1207 20:16:46.902093   24799 main.go:141] libmachine: () Calling .GetMachineName
I1207 20:16:46.902270   24799 main.go:141] libmachine: (functional-785124) Calling .DriverName
I1207 20:16:46.902480   24799 ssh_runner.go:195] Run: systemctl --version
I1207 20:16:46.902502   24799 main.go:141] libmachine: (functional-785124) Calling .GetSSHHostname
I1207 20:16:46.904690   24799 main.go:141] libmachine: (functional-785124) DBG | domain functional-785124 has defined MAC address 52:54:00:37:f5:bb in network mk-functional-785124
I1207 20:16:46.905063   24799 main.go:141] libmachine: (functional-785124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:f5:bb", ip: ""} in network mk-functional-785124: {Iface:virbr1 ExpiryTime:2023-12-07 21:13:51 +0000 UTC Type:0 Mac:52:54:00:37:f5:bb Iaid: IPaddr:192.168.50.231 Prefix:24 Hostname:functional-785124 Clientid:01:52:54:00:37:f5:bb}
I1207 20:16:46.905098   24799 main.go:141] libmachine: (functional-785124) DBG | domain functional-785124 has defined IP address 192.168.50.231 and MAC address 52:54:00:37:f5:bb in network mk-functional-785124
I1207 20:16:46.905196   24799 main.go:141] libmachine: (functional-785124) Calling .GetSSHPort
I1207 20:16:46.905347   24799 main.go:141] libmachine: (functional-785124) Calling .GetSSHKeyPath
I1207 20:16:46.905468   24799 main.go:141] libmachine: (functional-785124) Calling .GetSSHUsername
I1207 20:16:46.905598   24799 sshutil.go:53] new ssh client: &{IP:192.168.50.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/functional-785124/id_rsa Username:docker}
I1207 20:16:47.012071   24799 ssh_runner.go:195] Run: sudo crictl images --output json
I1207 20:16:47.069749   24799 main.go:141] libmachine: Making call to close driver server
I1207 20:16:47.069768   24799 main.go:141] libmachine: (functional-785124) Calling .Close
I1207 20:16:47.070010   24799 main.go:141] libmachine: Successfully made call to close driver server
I1207 20:16:47.070021   24799 main.go:141] libmachine: (functional-785124) DBG | Closing plugin on server side
I1207 20:16:47.070031   24799 main.go:141] libmachine: Making call to close connection to plugin binary
I1207 20:16:47.070041   24799 main.go:141] libmachine: Making call to close driver server
I1207 20:16:47.070050   24799 main.go:141] libmachine: (functional-785124) Calling .Close
I1207 20:16:47.070257   24799 main.go:141] libmachine: Successfully made call to close driver server
I1207 20:16:47.070282   24799 main.go:141] libmachine: Making call to close connection to plugin binary
I1207 20:16:47.070327   24799 main.go:141] libmachine: (functional-785124) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785124 ssh pgrep buildkitd: exit status 1 (206.360539ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image build -t localhost/my-image:functional-785124 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 image build -t localhost/my-image:functional-785124 testdata/build --alsologtostderr: (4.427026857s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-785124 image build -t localhost/my-image:functional-785124 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 30a2a96acd3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-785124
--> 941fe53842e
Successfully tagged localhost/my-image:functional-785124
941fe53842e4b1bb905e9abe13d903f979c54ed5a00d2049f26446def7222945
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-785124 image build -t localhost/my-image:functional-785124 testdata/build --alsologtostderr:
I1207 20:16:47.334923   24853 out.go:296] Setting OutFile to fd 1 ...
I1207 20:16:47.335096   24853 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:16:47.335106   24853 out.go:309] Setting ErrFile to fd 2...
I1207 20:16:47.335111   24853 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:16:47.335309   24853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
I1207 20:16:47.335872   24853 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1207 20:16:47.336408   24853 config.go:182] Loaded profile config "functional-785124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1207 20:16:47.336782   24853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1207 20:16:47.336835   24853 main.go:141] libmachine: Launching plugin server for driver kvm2
I1207 20:16:47.351100   24853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42351
I1207 20:16:47.351544   24853 main.go:141] libmachine: () Calling .GetVersion
I1207 20:16:47.352004   24853 main.go:141] libmachine: Using API Version  1
I1207 20:16:47.352028   24853 main.go:141] libmachine: () Calling .SetConfigRaw
I1207 20:16:47.352374   24853 main.go:141] libmachine: () Calling .GetMachineName
I1207 20:16:47.352554   24853 main.go:141] libmachine: (functional-785124) Calling .GetState
I1207 20:16:47.354209   24853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1207 20:16:47.354243   24853 main.go:141] libmachine: Launching plugin server for driver kvm2
I1207 20:16:47.368128   24853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
I1207 20:16:47.368496   24853 main.go:141] libmachine: () Calling .GetVersion
I1207 20:16:47.368830   24853 main.go:141] libmachine: Using API Version  1
I1207 20:16:47.368847   24853 main.go:141] libmachine: () Calling .SetConfigRaw
I1207 20:16:47.369099   24853 main.go:141] libmachine: () Calling .GetMachineName
I1207 20:16:47.369272   24853 main.go:141] libmachine: (functional-785124) Calling .DriverName
I1207 20:16:47.369461   24853 ssh_runner.go:195] Run: systemctl --version
I1207 20:16:47.369491   24853 main.go:141] libmachine: (functional-785124) Calling .GetSSHHostname
I1207 20:16:47.371998   24853 main.go:141] libmachine: (functional-785124) DBG | domain functional-785124 has defined MAC address 52:54:00:37:f5:bb in network mk-functional-785124
I1207 20:16:47.372357   24853 main.go:141] libmachine: (functional-785124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:f5:bb", ip: ""} in network mk-functional-785124: {Iface:virbr1 ExpiryTime:2023-12-07 21:13:51 +0000 UTC Type:0 Mac:52:54:00:37:f5:bb Iaid: IPaddr:192.168.50.231 Prefix:24 Hostname:functional-785124 Clientid:01:52:54:00:37:f5:bb}
I1207 20:16:47.372380   24853 main.go:141] libmachine: (functional-785124) DBG | domain functional-785124 has defined IP address 192.168.50.231 and MAC address 52:54:00:37:f5:bb in network mk-functional-785124
I1207 20:16:47.372535   24853 main.go:141] libmachine: (functional-785124) Calling .GetSSHPort
I1207 20:16:47.372690   24853 main.go:141] libmachine: (functional-785124) Calling .GetSSHKeyPath
I1207 20:16:47.372832   24853 main.go:141] libmachine: (functional-785124) Calling .GetSSHUsername
I1207 20:16:47.372965   24853 sshutil.go:53] new ssh client: &{IP:192.168.50.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/functional-785124/id_rsa Username:docker}
I1207 20:16:47.459706   24853 build_images.go:151] Building image from path: /tmp/build.358528736.tar
I1207 20:16:47.459780   24853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1207 20:16:47.468416   24853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.358528736.tar
I1207 20:16:47.472524   24853 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.358528736.tar: stat -c "%s %y" /var/lib/minikube/build/build.358528736.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.358528736.tar': No such file or directory
I1207 20:16:47.472558   24853 ssh_runner.go:362] scp /tmp/build.358528736.tar --> /var/lib/minikube/build/build.358528736.tar (3072 bytes)
I1207 20:16:47.505684   24853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.358528736
I1207 20:16:47.513960   24853 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.358528736 -xf /var/lib/minikube/build/build.358528736.tar
I1207 20:16:47.522205   24853 crio.go:297] Building image: /var/lib/minikube/build/build.358528736
I1207 20:16:47.522250   24853 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-785124 /var/lib/minikube/build/build.358528736 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1207 20:16:51.681242   24853 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-785124 /var/lib/minikube/build/build.358528736 --cgroup-manager=cgroupfs: (4.158974516s)
I1207 20:16:51.681292   24853 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.358528736
I1207 20:16:51.694262   24853 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.358528736.tar
I1207 20:16:51.703613   24853 build_images.go:207] Built localhost/my-image:functional-785124 from /tmp/build.358528736.tar
I1207 20:16:51.703636   24853 build_images.go:123] succeeded building to: functional-785124
I1207 20:16:51.703641   24853 build_images.go:124] failed building to: 
I1207 20:16:51.703661   24853 main.go:141] libmachine: Making call to close driver server
I1207 20:16:51.703677   24853 main.go:141] libmachine: (functional-785124) Calling .Close
I1207 20:16:51.703976   24853 main.go:141] libmachine: (functional-785124) DBG | Closing plugin on server side
I1207 20:16:51.703976   24853 main.go:141] libmachine: Successfully made call to close driver server
I1207 20:16:51.704004   24853 main.go:141] libmachine: Making call to close connection to plugin binary
I1207 20:16:51.704021   24853 main.go:141] libmachine: Making call to close driver server
I1207 20:16:51.704034   24853 main.go:141] libmachine: (functional-785124) Calling .Close
I1207 20:16:51.704237   24853 main.go:141] libmachine: Successfully made call to close driver server
I1207 20:16:51.704261   24853 main.go:141] libmachine: Making call to close connection to plugin binary
I1207 20:16:51.704260   24853 main.go:141] libmachine: (functional-785124) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.91s)
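The stderr above shows the crio path of `image build`: the local context is tarred, copied to /var/lib/minikube/build on the node, built there with podman, then cleaned up. A rough Go sketch of the outer flow the test drives, assuming a `minikube` binary on PATH and a local ./testdata/build directory containing a Dockerfile; this is an illustration of the CLI usage, not the test's own implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes the minikube CLI and returns its combined output; a non-zero
// exit is reported but not fatal, mirroring how the test logs failures.
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("minikube %s: %v\n", strings.Join(args, " "), err)
	}
	return string(out)
}

func main() {
	profile := "functional-785124" // profile name from this run
	tag := "localhost/my-image:" + profile

	// Build an image from a local context inside the cluster's runtime...
	run("-p", profile, "image", "build", "-t", tag, "testdata/build", "--alsologtostderr")

	// ...then confirm the tag shows up in the runtime's image list.
	if strings.Contains(run("-p", profile, "image", "ls"), "localhost/my-image") {
		fmt.Println("image present after build")
	}
}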

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.11s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.093756973s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-785124
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (30.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-785124 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-785124 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-htrkd" [d840c2d2-28b3-4a1d-8a8a-33181bdea479] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-htrkd" [d840c2d2-28b3-4a1d-8a8a-33181bdea479] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 30.262448753s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (30.45s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image load --daemon gcr.io/google-containers/addon-resizer:functional-785124 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 image load --daemon gcr.io/google-containers/addon-resizer:functional-785124 --alsologtostderr: (4.542699729s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (10.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image load --daemon gcr.io/google-containers/addon-resizer:functional-785124 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 image load --daemon gcr.io/google-containers/addon-resizer:functional-785124 --alsologtostderr: (10.199118748s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (10.47s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.074636468s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-785124
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image load --daemon gcr.io/google-containers/addon-resizer:functional-785124 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 image load --daemon gcr.io/google-containers/addon-resizer:functional-785124 --alsologtostderr: (4.805648987s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image save gcr.io/google-containers/addon-resizer:functional-785124 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 image save gcr.io/google-containers/addon-resizer:functional-785124 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.079254486s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.08s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image rm gcr.io/google-containers/addon-resizer:functional-785124 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.02s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.610208792s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.96s)
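ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a save, remove and reload round-trip through a tarball. A condensed Go sketch of the same sequence, assuming a `minikube` binary on PATH; the tag and profile match this run, while the /tmp archive path is a stand-in for the workspace path used above:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs a single minikube CLI invocation and reports whether it succeeded.
func mk(args ...string) error {
	return exec.Command("minikube", args...).Run()
}

func main() {
	profile := "functional-785124"
	img := "gcr.io/google-containers/addon-resizer:" + profile
	tar := "/tmp/addon-resizer-save.tar" // assumed scratch path

	steps := [][]string{
		{"-p", profile, "image", "save", img, tar}, // export from the runtime
		{"-p", profile, "image", "rm", img},        // drop it from the runtime
		{"-p", profile, "image", "load", tar},      // re-import from the tarball
	}
	for _, s := range steps {
		if err := mk(s...); err != nil {
			fmt.Println("step failed:", s, err)
			return
		}
	}
	fmt.Println("save/remove/load round-trip completed")
}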

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 service list -o json
functional_test.go:1493: Took "522.191852ms" to run "out/minikube-linux-amd64 -p functional-785124 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.231:31290
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.231:31290
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
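The ServiceCmd subtests above resolve the NodePort endpoint of the hello-node deployment in several output formats. A small Go sketch that asks for the plain URL and issues a GET against it, assuming a `minikube` binary on PATH and that the hello-node service from the DeployApp step still exists:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of `minikube -p functional-785124 service hello-node --url`.
	out, err := exec.Command("minikube", "-p", "functional-785124",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url) // e.g. http://192.168.50.231:31290 in this run
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s (%d bytes)\n", url, resp.Status, len(body))
}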

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-785124
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 image save --daemon gcr.io/google-containers/addon-resizer:functional-785124 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-785124 image save --daemon gcr.io/google-containers/addon-resizer:functional-785124 --alsologtostderr: (1.943002685s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-785124
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.98s)
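Save-to-daemon is the inverse of load-from-file: the image is exported from the cluster runtime into the host's Docker daemon. A minimal sketch, assuming Docker is available on the host:
# Drop any stale local copy, then export the image from the cluster into the Docker daemon.
docker rmi gcr.io/google-containers/addon-resizer:functional-785124 || true
minikube -p functional-785124 image save --daemon gcr.io/google-containers/addon-resizer:functional-785124 --alsologtostderr
# Verify it now exists locally.
docker image inspect gcr.io/google-containers/addon-resizer:functional-785124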

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "291.074011ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "77.407041ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "251.031736ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "59.262956ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)
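profile list supports table and JSON output; the --light variant returns faster above (≈59ms vs ≈251ms) because it skips validating each cluster's status. A minimal sketch:
# Full listing, validating cluster status
minikube profile list -o json
# Faster listing that skips status validation
minikube profile list -o json --light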

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.6s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdany-port757774139/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701980200744268555" to /tmp/TestFunctionalparallelMountCmdany-port757774139/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701980200744268555" to /tmp/TestFunctionalparallelMountCmdany-port757774139/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701980200744268555" to /tmp/TestFunctionalparallelMountCmdany-port757774139/001/test-1701980200744268555
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.053282ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh -- ls -la /mount-9p
E1207 20:16:41.700176   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:16:41.706036   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:16:41.716284   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:16:41.736551   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:16:41.776861   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:16:41.857755   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  7 20:16 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  7 20:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  7 20:16 test-1701980200744268555
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh cat /mount-9p/test-1701980200744268555
E1207 20:16:42.018347   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-785124 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e931c93f-4b86-4b9e-aa25-86a4e3cad46a] Pending
E1207 20:16:42.338604   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:16:42.979558   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [e931c93f-4b86-4b9e-aa25-86a4e3cad46a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1207 20:16:44.260500   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [e931c93f-4b86-4b9e-aa25-86a4e3cad46a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e931c93f-4b86-4b9e-aa25-86a4e3cad46a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.024058063s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-785124 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdany-port757774139/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.60s)
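The any-port variant lets minikube pick the 9p server port itself. A minimal sketch of the same check done interactively, assuming the functional-785124 profile from this run and a hypothetical host directory ./shared:
# Start the 9p mount in the background; minikube chooses the port.
minikube mount -p functional-785124 ./shared:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
# Verify the guest sees a 9p filesystem at the mount point and list it.
minikube -p functional-785124 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-785124 ssh -- ls -la /mount-9p
# Tear down.
minikube -p functional-785124 ssh "sudo umount -f /mount-9p"
kill $MOUNT_PID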

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdspecific-port2627841582/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.962452ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdspecific-port2627841582/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785124 ssh "sudo umount -f /mount-9p": exit status 1 (243.79422ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-785124 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdspecific-port2627841582/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1030454781/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1030454781/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1030454781/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T" /mount1: exit status 1 (335.097514ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T" /mount1
E1207 20:16:51.941725   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-785124 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-785124 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1030454781/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1030454781/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785124 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1030454781/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)
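Cleanup is verified via the --kill flag, which terminates every background mount process belonging to the profile in a single call; a minimal sketch:
# Kill all outstanding `minikube mount` helpers for this profile.
minikube mount -p functional-785124 --kill=true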

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-785124
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-785124
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-785124
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (124.22s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-393627 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1207 20:17:22.663256   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:18:03.623796   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-393627 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m4.219737503s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (124.22s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.5s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-393627 addons enable ingress --alsologtostderr -v=5
E1207 20:19:25.544835   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-393627 addons enable ingress --alsologtostderr -v=5: (16.504413411s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.50s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-393627 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)
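For reference, the legacy-cluster setup and addon activation covered by the three entries above reduce to the following commands; a minimal sketch using the same flags as the test:
# Bring up a cluster pinned to Kubernetes v1.18.20 on the KVM driver with cri-o.
minikube start -p ingress-addon-legacy-393627 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=kvm2 --container-runtime=crio
# Enable the ingress controller and ingress-dns addons on that profile.
minikube -p ingress-addon-legacy-393627 addons enable ingress --alsologtostderr -v=5
minikube -p ingress-addon-legacy-393627 addons enable ingress-dns --alsologtostderr -v=5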

                                                
                                    
TestJSONOutput/start/Command (66.31s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-287053 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1207 20:22:27.862034   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-287053 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m6.305481561s)
--- PASS: TestJSONOutput/start/Command (66.31s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-287053 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-287053 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-287053 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-287053 --output=json --user=testUser: (7.10642274s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-040129 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-040129 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.865234ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dfbeba2c-c981-4a9e-9f82-8401d6a772ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-040129] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ffebdc5-7a5c-4287-93cc-ed0f494b237b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17719"}}
	{"specversion":"1.0","id":"46509431-ffc3-4d9c-b413-708d7d2283b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7c754ddb-95e1-4078-aac2-72c1375ae936","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig"}}
	{"specversion":"1.0","id":"e99241d8-fa2f-434b-967a-642808052d5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube"}}
	{"specversion":"1.0","id":"79f26f40-9091-4359-882a-a5fdf7ae8cfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7bc0e053-805b-4d35-a17d-e13f8d6c4fca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5644442c-3952-4776-b746-b7bee9eecb8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-040129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-040129
--- PASS: TestErrorJSONOutput (0.22s)
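Each line of --output=json is a self-contained CloudEvents-style object (specversion, id, source, type, data), so failures can be detected by event type instead of scraping text. A minimal sketch, assuming jq is installed; the field paths match the objects shown above:
# Surface only error events from a deliberately failing start.
minikube start -p json-output-error-040129 --memory=2200 --output=json --wait=true --driver=fail \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'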

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (99.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-799687 --driver=kvm2  --container-runtime=crio
E1207 20:23:49.782217   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:24:28.941269   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:28.946578   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:28.956895   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:28.977221   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:29.017625   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:29.098025   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:29.258523   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-799687 --driver=kvm2  --container-runtime=crio: (47.983410442s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-802557 --driver=kvm2  --container-runtime=crio
E1207 20:24:29.579006   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:30.219973   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:31.500451   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:34.062119   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:39.182605   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:24:49.423761   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:25:09.903964   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-802557 --driver=kvm2  --container-runtime=crio: (48.588265798s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-799687
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-802557
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-802557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-802557
helpers_test.go:175: Cleaning up "first-799687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-799687
--- PASS: TestMinikubeProfile (99.25s)
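The profile test creates two clusters, flips the active profile between them, and deletes both; a minimal sketch of the same flow with the profile names from this run:
minikube start -p first-799687 --driver=kvm2 --container-runtime=crio
minikube start -p second-802557 --driver=kvm2 --container-runtime=crio
# Switch the active profile and inspect the result as JSON.
minikube profile first-799687
minikube profile list -ojson
minikube profile second-802557
minikube profile list -ojson
# Clean up both profiles.
minikube delete -p second-802557
minikube delete -p first-799687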

                                                
                                    
TestMountStart/serial/StartWithMountFirst (31.39s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-092336 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1207 20:25:50.865202   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-092336 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.38610448s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.39s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-092336 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-092336 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
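The MountStart group starts VM-only profiles (--no-kubernetes) whose 9p mount is wired in at start time rather than through a separate `minikube mount` process; the default host mount appears at /minikube-host in the guest, which is what the verification steps check. A minimal sketch with the same flags as the first profile above:
# Start a VM without Kubernetes, serving the default host mount on port 46464.
minikube start -p mount-start-1-092336 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
# Verify the mount inside the guest.
minikube -p mount-start-1-092336 ssh -- ls /minikube-host
minikube -p mount-start-1-092336 ssh -- mount | grep 9p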

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.91s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-109389 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1207 20:26:05.939794   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-109389 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.911490341s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.91s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-109389 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-109389 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-092336 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-109389 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-109389 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-109389
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-109389: (1.1671868s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.87s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-109389
E1207 20:26:33.623168   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:26:41.701514   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-109389: (23.873417634s)
--- PASS: TestMountStart/serial/RestartStopped (24.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-109389 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-109389 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (122.08s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-660958 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1207 20:27:12.785854   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-660958 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m1.670880985s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.08s)
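A two-node cluster comes up from a single start call; a minimal sketch with the same flags, followed by the status check and the node add exercised in the AddNode entry below:
minikube start -p multinode-660958 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
minikube -p multinode-660958 status --alsologtostderr
# Grow the cluster later with an extra worker node.
minikube node add -p multinode-660958 -v 3 --alsologtostderr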

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.6s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-660958 -- rollout status deployment/busybox: (3.882832837s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-jbm9q -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-vllfc -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-jbm9q -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-vllfc -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-jbm9q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-vllfc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.60s)
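The deployment check drives kubectl through minikube's bundled wrapper so the client matches the cluster version; a minimal sketch, assuming the multinode manifest from the repo's testdata directory (pod names such as busybox-5bc68d56bd-jbm9q vary per run and are taken from this log):
minikube kubectl -p multinode-660958 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
minikube kubectl -p multinode-660958 -- rollout status deployment/busybox
# Spot-check in-cluster DNS from one replica.
minikube kubectl -p multinode-660958 -- exec busybox-5bc68d56bd-jbm9q -- nslookup kubernetes.default.svc.cluster.local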

                                                
                                    
TestMultiNode/serial/AddNode (43.99s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-660958 -v 3 --alsologtostderr
E1207 20:29:28.941103   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-660958 -v 3 --alsologtostderr: (43.410041461s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.99s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-660958 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.57s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp testdata/cp-test.txt multinode-660958:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp multinode-660958:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile186535973/001/cp-test_multinode-660958.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp multinode-660958:/home/docker/cp-test.txt multinode-660958-m02:/home/docker/cp-test_multinode-660958_multinode-660958-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m02 "sudo cat /home/docker/cp-test_multinode-660958_multinode-660958-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp multinode-660958:/home/docker/cp-test.txt multinode-660958-m03:/home/docker/cp-test_multinode-660958_multinode-660958-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m03 "sudo cat /home/docker/cp-test_multinode-660958_multinode-660958-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp testdata/cp-test.txt multinode-660958-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp multinode-660958-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile186535973/001/cp-test_multinode-660958-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp multinode-660958-m02:/home/docker/cp-test.txt multinode-660958:/home/docker/cp-test_multinode-660958-m02_multinode-660958.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958 "sudo cat /home/docker/cp-test_multinode-660958-m02_multinode-660958.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp multinode-660958-m02:/home/docker/cp-test.txt multinode-660958-m03:/home/docker/cp-test_multinode-660958-m02_multinode-660958-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m03 "sudo cat /home/docker/cp-test_multinode-660958-m02_multinode-660958-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp testdata/cp-test.txt multinode-660958-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp multinode-660958-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile186535973/001/cp-test_multinode-660958-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp multinode-660958-m03:/home/docker/cp-test.txt multinode-660958:/home/docker/cp-test_multinode-660958-m03_multinode-660958.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958 "sudo cat /home/docker/cp-test_multinode-660958-m03_multinode-660958.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 cp multinode-660958-m03:/home/docker/cp-test.txt multinode-660958-m02:/home/docker/cp-test_multinode-660958-m03_multinode-660958-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 ssh -n multinode-660958-m02 "sudo cat /home/docker/cp-test_multinode-660958-m03_multinode-660958-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.57s)
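The copy matrix above exercises `minikube cp` in every direction (host to node, node to host, and node to node) and verifies each copy over ssh; a minimal sketch of one round trip with the node names from this run:
# Host -> control-plane node
minikube -p multinode-660958 cp testdata/cp-test.txt multinode-660958:/home/docker/cp-test.txt
# Node -> host
minikube -p multinode-660958 cp multinode-660958:/home/docker/cp-test.txt /tmp/cp-test_multinode-660958.txt
# Node -> second node, then verify on the target over ssh
minikube -p multinode-660958 cp multinode-660958:/home/docker/cp-test.txt multinode-660958-m02:/home/docker/cp-test.txt
minikube -p multinode-660958 ssh -n multinode-660958-m02 "sudo cat /home/docker/cp-test.txt"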

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-660958 node stop m03: (1.389965086s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-660958 status: exit status 7 (424.001658ms)

                                                
                                                
-- stdout --
	multinode-660958
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-660958-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-660958-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-660958 status --alsologtostderr: exit status 7 (432.576617ms)

                                                
                                                
-- stdout --
	multinode-660958
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-660958-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-660958-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 20:29:53.982930   32973 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:29:53.983072   32973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:29:53.983084   32973 out.go:309] Setting ErrFile to fd 2...
	I1207 20:29:53.983091   32973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:29:53.983284   32973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 20:29:53.983449   32973 out.go:303] Setting JSON to false
	I1207 20:29:53.983477   32973 mustload.go:65] Loading cluster: multinode-660958
	I1207 20:29:53.983586   32973 notify.go:220] Checking for updates...
	I1207 20:29:53.984023   32973 config.go:182] Loaded profile config "multinode-660958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 20:29:53.984042   32973 status.go:255] checking status of multinode-660958 ...
	I1207 20:29:53.984522   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:29:53.984590   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:29:53.998972   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I1207 20:29:53.999407   32973 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:29:53.999976   32973 main.go:141] libmachine: Using API Version  1
	I1207 20:29:54.000020   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:29:54.000432   32973 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:29:54.000615   32973 main.go:141] libmachine: (multinode-660958) Calling .GetState
	I1207 20:29:54.002232   32973 status.go:330] multinode-660958 host status = "Running" (err=<nil>)
	I1207 20:29:54.002251   32973 host.go:66] Checking if "multinode-660958" exists ...
	I1207 20:29:54.002556   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:29:54.002589   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:29:54.018308   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I1207 20:29:54.018666   32973 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:29:54.019077   32973 main.go:141] libmachine: Using API Version  1
	I1207 20:29:54.019096   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:29:54.019394   32973 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:29:54.019549   32973 main.go:141] libmachine: (multinode-660958) Calling .GetIP
	I1207 20:29:54.022221   32973 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:29:54.022654   32973 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:29:54.022683   32973 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:29:54.022793   32973 host.go:66] Checking if "multinode-660958" exists ...
	I1207 20:29:54.023172   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:29:54.023215   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:29:54.036989   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I1207 20:29:54.037395   32973 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:29:54.037760   32973 main.go:141] libmachine: Using API Version  1
	I1207 20:29:54.037783   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:29:54.038117   32973 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:29:54.038267   32973 main.go:141] libmachine: (multinode-660958) Calling .DriverName
	I1207 20:29:54.038442   32973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 20:29:54.038473   32973 main.go:141] libmachine: (multinode-660958) Calling .GetSSHHostname
	I1207 20:29:54.040894   32973 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:29:54.041380   32973 main.go:141] libmachine: (multinode-660958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:93:7e", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:27:05 +0000 UTC Type:0 Mac:52:54:00:f5:93:7e Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-660958 Clientid:01:52:54:00:f5:93:7e}
	I1207 20:29:54.041419   32973 main.go:141] libmachine: (multinode-660958) DBG | domain multinode-660958 has defined IP address 192.168.39.19 and MAC address 52:54:00:f5:93:7e in network mk-multinode-660958
	I1207 20:29:54.041567   32973 main.go:141] libmachine: (multinode-660958) Calling .GetSSHPort
	I1207 20:29:54.041754   32973 main.go:141] libmachine: (multinode-660958) Calling .GetSSHKeyPath
	I1207 20:29:54.042007   32973 main.go:141] libmachine: (multinode-660958) Calling .GetSSHUsername
	I1207 20:29:54.042156   32973 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958/id_rsa Username:docker}
	I1207 20:29:54.131094   32973 ssh_runner.go:195] Run: systemctl --version
	I1207 20:29:54.137215   32973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:29:54.152911   32973 kubeconfig.go:92] found "multinode-660958" server: "https://192.168.39.19:8443"
	I1207 20:29:54.152941   32973 api_server.go:166] Checking apiserver status ...
	I1207 20:29:54.152988   32973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:29:54.166462   32973 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1064/cgroup
	I1207 20:29:54.176255   32973 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod3be2f0b39689e91f9171b575c679c7c3/crio-6feb8b3d9d8e69b81f6eb7f6c5ad15c287d21f7bc6ea1ed35fc5a363d7cd203c"
	I1207 20:29:54.176332   32973 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod3be2f0b39689e91f9171b575c679c7c3/crio-6feb8b3d9d8e69b81f6eb7f6c5ad15c287d21f7bc6ea1ed35fc5a363d7cd203c/freezer.state
	I1207 20:29:54.186022   32973 api_server.go:204] freezer state: "THAWED"
	I1207 20:29:54.186044   32973 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1207 20:29:54.191131   32973 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1207 20:29:54.191150   32973 status.go:421] multinode-660958 apiserver status = Running (err=<nil>)
	I1207 20:29:54.191158   32973 status.go:257] multinode-660958 status: &{Name:multinode-660958 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 20:29:54.191177   32973 status.go:255] checking status of multinode-660958-m02 ...
	I1207 20:29:54.191448   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:29:54.191479   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:29:54.205776   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I1207 20:29:54.206193   32973 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:29:54.206633   32973 main.go:141] libmachine: Using API Version  1
	I1207 20:29:54.206652   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:29:54.206922   32973 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:29:54.207098   32973 main.go:141] libmachine: (multinode-660958-m02) Calling .GetState
	I1207 20:29:54.208559   32973 status.go:330] multinode-660958-m02 host status = "Running" (err=<nil>)
	I1207 20:29:54.208579   32973 host.go:66] Checking if "multinode-660958-m02" exists ...
	I1207 20:29:54.208838   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:29:54.208869   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:29:54.222785   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45911
	I1207 20:29:54.223109   32973 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:29:54.223551   32973 main.go:141] libmachine: Using API Version  1
	I1207 20:29:54.223572   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:29:54.223885   32973 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:29:54.224060   32973 main.go:141] libmachine: (multinode-660958-m02) Calling .GetIP
	I1207 20:29:54.226645   32973 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:29:54.227082   32973 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:29:54.227135   32973 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:29:54.227237   32973 host.go:66] Checking if "multinode-660958-m02" exists ...
	I1207 20:29:54.227653   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:29:54.227695   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:29:54.241696   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
	I1207 20:29:54.242086   32973 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:29:54.242462   32973 main.go:141] libmachine: Using API Version  1
	I1207 20:29:54.242482   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:29:54.242753   32973 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:29:54.242908   32973 main.go:141] libmachine: (multinode-660958-m02) Calling .DriverName
	I1207 20:29:54.243060   32973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 20:29:54.243082   32973 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHHostname
	I1207 20:29:54.245476   32973 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:29:54.245854   32973 main.go:141] libmachine: (multinode-660958-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:1e:84", ip: ""} in network mk-multinode-660958: {Iface:virbr1 ExpiryTime:2023-12-07 21:28:13 +0000 UTC Type:0 Mac:52:54:00:ec:1e:84 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-660958-m02 Clientid:01:52:54:00:ec:1e:84}
	I1207 20:29:54.245892   32973 main.go:141] libmachine: (multinode-660958-m02) DBG | domain multinode-660958-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:ec:1e:84 in network mk-multinode-660958
	I1207 20:29:54.246056   32973 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHPort
	I1207 20:29:54.246195   32973 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHKeyPath
	I1207 20:29:54.246338   32973 main.go:141] libmachine: (multinode-660958-m02) Calling .GetSSHUsername
	I1207 20:29:54.246495   32973 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17719-9628/.minikube/machines/multinode-660958-m02/id_rsa Username:docker}
	I1207 20:29:54.329531   32973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:29:54.342937   32973 status.go:257] multinode-660958-m02 status: &{Name:multinode-660958-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1207 20:29:54.342977   32973 status.go:255] checking status of multinode-660958-m03 ...
	I1207 20:29:54.343280   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1207 20:29:54.343326   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1207 20:29:54.358283   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I1207 20:29:54.358788   32973 main.go:141] libmachine: () Calling .GetVersion
	I1207 20:29:54.359286   32973 main.go:141] libmachine: Using API Version  1
	I1207 20:29:54.359316   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1207 20:29:54.359675   32973 main.go:141] libmachine: () Calling .GetMachineName
	I1207 20:29:54.359861   32973 main.go:141] libmachine: (multinode-660958-m03) Calling .GetState
	I1207 20:29:54.361701   32973 status.go:330] multinode-660958-m03 host status = "Stopped" (err=<nil>)
	I1207 20:29:54.361713   32973 status.go:343] host is not running, skipping remaining checks
	I1207 20:29:54.361718   32973 status.go:257] multinode-660958-m03 status: &{Name:multinode-660958-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
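
The stop-node behaviour exercised above can be reproduced by hand. A minimal sketch, using only the profile name, node name, and flags that appear in this run's output; note that `status` deliberately exits non-zero (7 here) once any host is Stopped, so a script should not treat that as a hard failure:

    # Stop only the m03 worker; the control plane keeps running
    out/minikube-linux-amd64 -p multinode-660958 node stop m03

    # Overall status now reports m03 as host/kubelet Stopped and exits 7
    out/minikube-linux-amd64 -p multinode-660958 status --alsologtostderr
    echo "status exit code: $?"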

                                                
                                    
TestMultiNode/serial/StartAfterStop (33.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 node start m03 --alsologtostderr
E1207 20:29:56.626059   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-660958 node start m03 --alsologtostderr: (33.127613535s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (33.75s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-660958 node delete m03: (1.008124926s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.56s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (447.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-660958 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1207 20:44:28.941590   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:46:05.939196   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:46:41.701314   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:49:28.941715   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 20:49:44.746918   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 20:51:05.938981   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 20:51:41.699879   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-660958 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.678437806s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-660958 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (447.21s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (49.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-660958
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-660958-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-660958-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (79.244235ms)

                                                
                                                
-- stdout --
	* [multinode-660958-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-660958-m02' is duplicated with machine name 'multinode-660958-m02' in profile 'multinode-660958'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-660958-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-660958-m03 --driver=kvm2  --container-runtime=crio: (48.756081852s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-660958
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-660958: exit status 80 (236.666079ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-660958
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-660958-m03 already exists in multinode-660958-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-660958-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.89s)
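
The two rejections above encode a naming rule: a profile name may not collide with a machine name inside another profile, and `node add` refuses to create a node whose name already belongs to a standalone profile. A condensed sketch of the same sequence (commands are copied from this run; the comments are mine):

    out/minikube-linux-amd64 start -p multinode-660958-m02 --driver=kvm2 --container-runtime=crio   # exit 14: clashes with machine m02 of multinode-660958
    out/minikube-linux-amd64 start -p multinode-660958-m03 --driver=kvm2 --container-runtime=crio   # succeeds as its own profile
    out/minikube-linux-amd64 node add -p multinode-660958                                           # exit 80: the next node name (m03) is already taken
    out/minikube-linux-amd64 delete -p multinode-660958-m03                                         # clean up the conflicting profile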

                                                
                                    
TestScheduledStopUnix (118.91s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-987047 --memory=2048 --driver=kvm2  --container-runtime=crio
E1207 20:57:31.988020   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-987047 --memory=2048 --driver=kvm2  --container-runtime=crio: (47.168749368s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-987047 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-987047 -n scheduled-stop-987047
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-987047 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-987047 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-987047 -n scheduled-stop-987047
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-987047
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-987047 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-987047
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-987047: exit status 7 (73.1151ms)

                                                
                                                
-- stdout --
	scheduled-stop-987047
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-987047 -n scheduled-stop-987047
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-987047 -n scheduled-stop-987047: exit status 7 (78.330109ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-987047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-987047
--- PASS: TestScheduledStopUnix (118.91s)
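
The scheduled-stop flow tested above is driven entirely by the `stop --schedule` and `status --format` flags shown in the log; a minimal sketch of the same checks outside the test harness (durations are illustrative):

    # Arm a stop 5 minutes out and confirm the timer is visible in status
    out/minikube-linux-amd64 stop -p scheduled-stop-987047 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-987047 -n scheduled-stop-987047

    # Re-arming replaces the previous schedule; --cancel-scheduled clears it
    out/minikube-linux-amd64 stop -p scheduled-stop-987047 --schedule 15s
    out/minikube-linux-amd64 stop -p scheduled-stop-987047 --cancel-scheduled

    # After a scheduled stop fires, status exits 7 and reports Stopped
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-987047 -n scheduled-stop-987047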

                                                
                                    
TestKubernetesUpgrade (242.16s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1207 21:01:05.939335   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.118273317s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-963951
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-963951: (2.228632335s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-963951 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-963951 status --format={{.Host}}: exit status 7 (80.568037ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.496767742s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-963951 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (103.926517ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-963951] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-963951
	    minikube start -p kubernetes-upgrade-963951 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9639512 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-963951 --kubernetes-version=v1.29.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.900127883s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-963951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-963951
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-963951: (1.17166113s)
--- PASS: TestKubernetesUpgrade (242.16s)
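
The upgrade path validated above can be retraced with the same commands the test ran; a sketch using this run's profile name and versions. The key behaviour is the last step: asking the upgraded profile for an older Kubernetes is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than attempted:

    # Old cluster -> stop -> restart on the newer Kubernetes
    out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-963951
    out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --driver=kvm2 --container-runtime=crio

    # Downgrade attempt is rejected (exit 106); delete and recreate the profile if an older version is really needed
    out/minikube-linux-amd64 start -p kubernetes-upgrade-963951 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio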

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-797842 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-797842 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (102.454063ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-797842] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
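
The exit status 14 here is the expected usage guard: --no-kubernetes and --kubernetes-version are mutually exclusive. A short sketch of the failure and the remedy the error message itself suggests:

    # Rejected: a version pin contradicts --no-kubernetes
    out/minikube-linux-amd64 start -p NoKubernetes-797842 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio

    # If the pin comes from the global config, clear it and retry without a version
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-797842 --no-kubernetes --driver=kvm2 --container-runtime=crio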

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (134.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-797842 --driver=kvm2  --container-runtime=crio
E1207 20:59:28.942247   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-797842 --driver=kvm2  --container-runtime=crio: (2m14.069380346s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-797842 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (134.37s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (38.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-797842 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-797842 --no-kubernetes --driver=kvm2  --container-runtime=crio: (37.605695556s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-797842 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-797842 status -o json: exit status 2 (263.152659ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-797842","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-797842
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-797842: (1.0705882s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.94s)
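
The JSON status above is the quickest way to confirm a no-kubernetes profile is up with the kubelet deliberately stopped. A sketch assuming jq is available (jq is not part of this run); note that the status command itself exited 2 here because components are stopped, which is expected:

    out/minikube-linux-amd64 -p NoKubernetes-797842 status -o json | jq -r '"\(.Host) / \(.Kubelet) / \(.APIServer)"'
    # Expected, per the output above: Running / Stopped / Stopped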

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.92s)

                                                
                                    
TestNoKubernetes/serial/Start (27.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-797842 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-797842 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.968361524s)
--- PASS: TestNoKubernetes/serial/Start (27.97s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-797842 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-797842 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.091346ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-797842
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-797842: (1.244051277s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (27.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-797842 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-797842 --driver=kvm2  --container-runtime=crio: (27.203350947s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (27.20s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-797842 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-797842 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.552826ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
TestPause/serial/Start (103.03s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-763966 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-763966 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m43.026275079s)
--- PASS: TestPause/serial/Start (103.03s)

                                                
                                    
TestNetworkPlugins/group/false (3.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-715748 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-715748 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (118.111985ms)

                                                
                                                
-- stdout --
	* [false-715748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 21:05:06.258715   45510 out.go:296] Setting OutFile to fd 1 ...
	I1207 21:05:06.258845   45510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:05:06.258853   45510 out.go:309] Setting ErrFile to fd 2...
	I1207 21:05:06.258858   45510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 21:05:06.259035   45510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-9628/.minikube/bin
	I1207 21:05:06.259621   45510 out.go:303] Setting JSON to false
	I1207 21:05:06.260527   45510 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6452,"bootTime":1701976654,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 21:05:06.260586   45510 start.go:138] virtualization: kvm guest
	I1207 21:05:06.262972   45510 out.go:177] * [false-715748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1207 21:05:06.264667   45510 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 21:05:06.264714   45510 notify.go:220] Checking for updates...
	I1207 21:05:06.267725   45510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 21:05:06.269469   45510 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-9628/kubeconfig
	I1207 21:05:06.271218   45510 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-9628/.minikube
	I1207 21:05:06.272932   45510 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 21:05:06.274556   45510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 21:05:06.276620   45510 config.go:182] Loaded profile config "cert-options-620116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:05:06.276722   45510 config.go:182] Loaded profile config "pause-763966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1207 21:05:06.276768   45510 config.go:182] Loaded profile config "stopped-upgrade-099448": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1207 21:05:06.276839   45510 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 21:05:06.313778   45510 out.go:177] * Using the kvm2 driver based on user configuration
	I1207 21:05:06.315521   45510 start.go:298] selected driver: kvm2
	I1207 21:05:06.315539   45510 start.go:902] validating driver "kvm2" against <nil>
	I1207 21:05:06.315552   45510 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 21:05:06.317897   45510 out.go:177] 
	W1207 21:05:06.319621   45510 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1207 21:05:06.321215   45510 out.go:177] 

                                                
                                                
** /stderr **
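
The rejection above is the guard this test exists to verify: with the crio runtime, --cni=false is not allowed, and minikube exits 14 before touching the driver. A hedged sketch of the passing form, assuming any concrete CNI value accepted by minikube's --cni flag (bridge is used here as an example) satisfies the check:

    # Rejected: "The \"crio\" container runtime requires CNI" (exit 14)
    out/minikube-linux-amd64 start -p false-715748 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio

    # Assumed-accepted form: pick a real CNI instead of disabling it
    out/minikube-linux-amd64 start -p false-715748 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio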
net_test.go:88: 
----------------------- debugLogs start: false-715748 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-715748" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-715748

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: docker system info:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: cri-docker daemon status:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: cri-docker daemon config:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: cri-dockerd version:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: containerd daemon status:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: containerd daemon config:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: /etc/containerd/config.toml:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: containerd config dump:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: crio daemon status:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: crio daemon config:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: /etc/crio:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

>>> host: crio config:
* Profile "false-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-715748"

----------------------- debugLogs end: false-715748 [took: 3.301497648s] --------------------------------
helpers_test.go:175: Cleaning up "false-715748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-715748
--- PASS: TestNetworkPlugins/group/false (3.60s)

TestStartStop/group/old-k8s-version/serial/FirstStart (166.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-483745 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-483745 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m46.70272262s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (166.70s)

TestStartStop/group/no-preload/serial/FirstStart (232.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-950431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1207 21:06:24.747541   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-950431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (3m52.138432087s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (232.14s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.4s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-099448
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.40s)

TestStartStop/group/embed-certs/serial/FirstStart (104.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-598346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-598346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m44.322635436s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (104.32s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-483745 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3dba3552-a582-43cb-9622-fd55b9c1db53] Pending
helpers_test.go:344: "busybox" [3dba3552-a582-43cb-9622-fd55b9c1db53] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3dba3552-a582-43cb-9622-fd55b9c1db53] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.040297879s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-483745 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-483745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-483745 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-275828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-275828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m41.377920633s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.38s)

TestStartStop/group/embed-certs/serial/DeployApp (12.51s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-598346 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1ad51f4f-b5f5-4c4c-a0a1-98a452b265a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1ad51f4f-b5f5-4c4c-a0a1-98a452b265a7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.030494702s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-598346 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.51s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.92s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-598346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-598346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.812266006s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-598346 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.92s)

TestStartStop/group/no-preload/serial/DeployApp (11.89s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-950431 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6006c7f0-a4b5-47b6-b868-2b11d25891e8] Pending
helpers_test.go:344: "busybox" [6006c7f0-a4b5-47b6-b868-2b11d25891e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6006c7f0-a4b5-47b6-b868-2b11d25891e8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.033587162s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-950431 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.89s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-275828 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [40929895-a56a-4b7c-8f5e-2bf0e8711984] Pending
helpers_test.go:344: "busybox" [40929895-a56a-4b7c-8f5e-2bf0e8711984] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [40929895-a56a-4b7c-8f5e-2bf0e8711984] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.030025656s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-275828 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.43s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-950431 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-950431 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-275828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-275828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008215293s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-275828 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/old-k8s-version/serial/SecondStart (803.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-483745 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1207 21:10:48.986043   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-483745 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m22.860151499s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-483745 -n old-k8s-version-483745
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (803.14s)

TestStartStop/group/embed-certs/serial/SecondStart (565.45s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-598346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1207 21:11:41.700053   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-598346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m25.151819658s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-598346 -n embed-certs-598346
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (565.45s)

TestStartStop/group/no-preload/serial/SecondStart (531.73s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-950431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-950431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (8m51.445145931s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-950431 -n no-preload-950431
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (531.73s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (510.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-275828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1207 21:14:11.988957   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 21:14:28.941812   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 21:16:05.939487   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 21:16:41.700370   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 21:19:28.941130   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-275828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (8m30.673025823s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275828 -n default-k8s-diff-port-275828
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (510.96s)

TestStartStop/group/newest-cni/serial/FirstStart (59.77s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-155321 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1207 21:36:05.939267   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-155321 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (59.771601557s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.77s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.78s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-155321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-155321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.784051525s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.78s)

TestNetworkPlugins/group/auto/Start (66.74s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m6.740293321s)
--- PASS: TestNetworkPlugins/group/auto/Start (66.74s)

TestNetworkPlugins/group/kindnet/Start (72.37s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m12.369402671s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.37s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-715748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-715748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2gpsb" [8c28348b-54a0-4abf-a90f-db7298fd26ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2gpsb" [8c28348b-54a0-4abf-a90f-db7298fd26ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.010449586s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.38s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-715748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/calico/Start (96.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1207 21:38:00.631117   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:00.636390   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:00.646884   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:00.667200   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:00.707523   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:00.787915   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:00.948633   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:01.268735   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:01.909449   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:03.190004   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:05.750885   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:38:10.872082   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m36.763654849s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.76s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-f5p4g" [7d1ce3bd-6313-43a2-b6ce-42f0b28f8639] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.022947669s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-715748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.45s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-715748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qjjpl" [7c902b7e-6ad8-4de0-995a-602aab30f748] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qjjpl" [7c902b7e-6ad8-4de0-995a-602aab30f748] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.017932761s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.45s)

TestNetworkPlugins/group/custom-flannel/Start (92.54s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m32.5404659s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.54s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-715748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (423.03s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-155321 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-155321 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (7m2.617481054s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155321 -n newest-cni-155321
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (423.03s)

TestNetworkPlugins/group/enable-default-cni/Start (393.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1207 21:39:22.553656   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:39:28.941849   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (6m33.103883413s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (393.10s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-c4qnh" [b91cfc4a-6a76-486a-ab1a-1e92f8327900] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.024969093s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-715748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-715748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w5vrp" [b54d317b-5681-47aa-82ab-9b84e253d43a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w5vrp" [b54d317b-5681-47aa-82ab-9b84e253d43a] Running
E1207 21:39:44.748753   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.013871908s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.40s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-715748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (337.33s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (5m37.327137871s)
--- PASS: TestNetworkPlugins/group/flannel/Start (337.33s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-715748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-715748 replace --force -f testdata/netcat-deployment.yaml
E1207 21:40:07.607068   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:40:07.612333   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:40:07.622546   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:40:07.642799   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z2gxb" [4aa3b4cd-265b-404c-bf36-b7234bf23c7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1207 21:40:07.683744   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:40:07.764070   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:40:07.924539   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:40:08.245137   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:40:08.885641   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:40:10.165839   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:40:11.253896   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:11.259162   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:11.269416   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:11.289776   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:11.330081   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:11.410437   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:11.570828   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:11.891874   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:12.532777   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:12.726532   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-z2gxb" [4aa3b4cd-265b-404c-bf36-b7234bf23c7b] Running
E1207 21:40:13.812926   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:16.373972   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:40:17.847069   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.01511811s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-715748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (305.81s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1207 21:40:44.476608   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:40:48.567831   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:40:52.215748   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:41:05.938925   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 21:41:29.528361   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:41:33.177209   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:41:41.699526   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/addons-757601/client.crt: no such file or directory
E1207 21:42:25.735393   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:25.740662   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:25.750899   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:25.771197   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:25.811479   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:25.891981   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:26.052504   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:26.373133   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:27.014028   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:28.294654   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:30.854955   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:35.975530   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:46.215762   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:42:51.449495   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:42:55.097539   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:43:00.631186   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:43:06.695990   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:43:19.886408   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:19.891663   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:19.902004   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:19.922391   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:19.962765   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:20.043167   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:20.203622   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:20.524257   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:21.165329   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:22.446453   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:25.007578   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:28.317253   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/old-k8s-version-483745/client.crt: no such file or directory
E1207 21:43:30.128732   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:40.369225   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:43:47.656679   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:44:00.850171   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:44:08.987997   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/functional-785124/client.crt: no such file or directory
E1207 21:44:28.941189   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/ingress-addon-legacy-393627/client.crt: no such file or directory
E1207 21:44:31.234620   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:31.239877   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:31.250148   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:31.270585   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:31.310948   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:31.391285   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:31.552037   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:31.872770   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:32.512932   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:33.793337   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:36.353836   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:41.474951   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:44:41.810633   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/kindnet-715748/client.crt: no such file or directory
E1207 21:44:51.715753   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:45:07.607020   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
E1207 21:45:07.649268   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:07.654624   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:07.664941   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:07.685246   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:07.725881   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:07.806188   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:07.966325   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:08.286895   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:08.927956   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:09.577334   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/auto-715748/client.crt: no such file or directory
E1207 21:45:10.208960   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:11.253831   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
E1207 21:45:12.196584   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
E1207 21:45:12.769610   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:17.890688   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
E1207 21:45:28.130861   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-715748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (5m5.811001734s)
--- PASS: TestNetworkPlugins/group/bridge/Start (305.81s)
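
The repeated cert_rotation errors during this start appear to be client-side noise: the test process still holds kubeconfig entries for profiles that have already been torn down (old-k8s-version, no-preload, auto, kindnet, calico, custom-flannel, ...), so client-go keeps trying to reload client.crt files that no longer exist. They did not affect the bridge start itself, which passed. A minimal sketch of how such stale entries could be surfaced, assuming client-go is available as a dependency and the default kubeconfig path; this is illustrative, not part of the harness:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

// List kubeconfig users whose client certificate files are missing on disk,
// which is the condition the cert_rotation errors above keep reporting.
func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		panic(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue
		}
		if _, err := os.Stat(auth.ClientCertificate); err != nil {
			fmt.Printf("user %q: %v\n", name, err)
		}
	}
}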

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-715748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-715748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gw64b" [1b852b21-2f37-4e0f-93f4-57c4c75ccafc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1207 21:45:35.290375   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/no-preload-950431/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-gw64b" [1b852b21-2f37-4e0f-93f4-57c4c75ccafc] Running
E1207 21:45:38.938542   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/default-k8s-diff-port-275828/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.012078766s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.43s)
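
The NetCatPod steps replace testdata/netcat-deployment.yaml and then poll until a pod labelled app=netcat is Running and Ready (about 13s here). Outside the harness the same wait can be expressed with kubectl wait; a minimal sketch in Go, reusing the context name and label selector from the run above:

package main

import (
	"os"
	"os/exec"
)

// Block until the netcat pod reports Ready, mirroring the harness wait above.
func main() {
	cmd := exec.Command("kubectl", "--context", "enable-default-cni-715748",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=netcat", "--timeout=15m")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}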

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jspqn" [aae62ac8-b20f-4169-96ed-e0c6023e82b9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.026417928s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-715748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-715748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-slwqs" [1e0370ad-4b25-43b2-aef4-c2437e3b6f79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-slwqs" [1e0370ad-4b25-43b2-aef4-c2437e3b6f79] Running
E1207 21:45:53.157266   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/calico-715748/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.020532059s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-715748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-715748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-715748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9fcjp" [c3ade187-ca25-4794-859b-477ca6f26e17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1207 21:45:48.611428   16840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-9628/.minikube/profiles/custom-flannel-715748/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-9fcjp" [c3ade187-ca25-4794-859b-477ca6f26e17] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.02006938s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-155321 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (16.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-715748 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-715748 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.206267742s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-715748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (16.04s)
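
The first nslookup of kubernetes.default timed out and the harness simply reran it, which succeeded, so the DNS check still passed after roughly 16 seconds. A minimal sketch of the same retry-on-timeout pattern in Go, assuming it runs where the cluster DNS is the default resolver:

package main

import (
	"context"
	"fmt"
	"net"
	"os"
	"time"
)

// Retry the lookup a few times before declaring cluster DNS broken,
// mirroring the second-attempt behaviour of the test above.
func main() {
	const name = "kubernetes.default"
	for attempt := 1; attempt <= 3; attempt++ {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		addrs, err := net.DefaultResolver.LookupHost(ctx, name)
		cancel()
		if err == nil {
			fmt.Println("resolved:", addrs)
			return
		}
		fmt.Fprintf(os.Stderr, "attempt %d: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	os.Exit(1)
}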

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-155321 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-155321 -n newest-cni-155321
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-155321 -n newest-cni-155321: exit status 2 (294.191282ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-155321 -n newest-cni-155321
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-155321 -n newest-cni-155321: exit status 2 (290.467553ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-155321 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-155321 -n newest-cni-155321
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-155321 -n newest-cni-155321
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.05s)
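
The Pause step pauses the profile, reads the per-component status, unpauses, and reads it again. While paused, the status command exits with status 2 and reports the API server as Paused and the kubelet as Stopped, which the harness tolerates ("may be ok"). A minimal sketch of the same probe in Go, using the binary path and profile name from the run above and treating a non-zero exit as informational rather than fatal:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Query individual status fields the way the test does; a paused profile
// returns exit status 2 here, so the error is reported but not treated as fatal.
func main() {
	for _, field := range []string{"APIServer", "Kubelet"} {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{."+field+"}}", "-p", "newest-cni-155321").CombinedOutput()
		fmt.Printf("%s: %s (err: %v)\n", field, strings.TrimSpace(string(out)), err)
	}
}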

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-715748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-715748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    

Test skip (39/299)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.1/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.1/binaries 0
21 TestDownloadOnly/v1.29.0-rc.1/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestGvisorAddon 0
158 TestImageBuild 0
191 TestKicCustomNetwork 0
192 TestKicExistingNetwork 0
193 TestKicCustomSubnet 0
194 TestKicStaticIP 0
226 TestChangeNoneUser 0
229 TestScheduledStopWindows 0
231 TestSkaffold 0
233 TestInsufficientStorage 0
237 TestMissingContainerUpgrade 0
255 TestStartStop/group/disable-driver-mounts 0.15
261 TestNetworkPlugins/group/kubenet 3.41
269 TestNetworkPlugins/group/cilium 3.86
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-121798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-121798
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-715748 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-715748" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
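
The empty kubectl config above (no clusters, no contexts, empty current-context) matches the "context was not found" errors throughout this dump: the kubenet-715748 profile was never started, so nothing was ever written to the kubeconfig for it. A minimal sketch of checking for a named context programmatically, assuming k8s.io/client-go is available; the kubeconfig path and context name below are illustrative, not taken from the test harness:

// sketch_kubeconfig.go — illustrative only, not part of the minikube test suite.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// hasContext reports whether the kubeconfig at path defines the named context.
func hasContext(path, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path) // parses the file into an api.Config
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	// Illustrative values; a never-started profile yields "context present: false".
	ok, err := hasContext("/root/.kube/config", "kubenet-715748")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("context present:", ok)
}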

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-715748

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-715748"

                                                
                                                
----------------------- debugLogs end: kubenet-715748 [took: 3.244504075s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-715748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-715748
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)
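
For readers unfamiliar with how these SKIP entries arise: both network-plugin groups in this report bail out before any cluster is started, which is why every debug-log query that follows reports a missing profile or context. The following is a minimal sketch of that skip pattern using only the standard Go testing API; it is not the actual net_test.go source, and the guard condition is an assumption for illustration:

// sketch_skip_test.go — illustrative only, not the real minikube test.
package sketch

import "testing"

func TestNetworkPluginSketch(t *testing.T) {
	// Hypothetical guard; the real test evaluates its own conditions.
	outdatedOrConflicting := true
	if outdatedOrConflicting {
		// t.Skipf records the reason, marks the test skipped, and stops it here,
		// producing a "--- SKIP: ..." line in the report.
		t.Skipf("Skipping the test as it's interfering with other tests and is outdated")
	}
	// The body below only runs when the guard passes.
	t.Log("would start a cluster with the selected network plugin here")
}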

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-715748 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-715748" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-715748

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-715748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-715748"

                                                
                                                
----------------------- debugLogs end: cilium-715748 [took: 3.701974649s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-715748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-715748
--- SKIP: TestNetworkPlugins/group/cilium (3.86s)
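
Both skipped groups finish with the same cleanup step: helpers_test.go shells out to the minikube binary to delete the never-started profile, as the "Run:  out/minikube-linux-amd64 delete -p ..." lines above show. A minimal sketch of that pattern using only the standard library; the binary path and profile name are copied from the log, everything else is illustrative:

// sketch_cleanup.go — illustrative only, not the actual helpers_test.go code.
package main

import (
	"fmt"
	"os/exec"
)

// deleteProfile shells out to the minikube binary the same way the test helper does.
func deleteProfile(binary, profile string) error {
	out, err := exec.Command(binary, "delete", "-p", profile).CombinedOutput()
	if err != nil {
		return fmt.Errorf("delete %s failed: %v\n%s", profile, err, out)
	}
	return nil
}

func main() {
	if err := deleteProfile("out/minikube-linux-amd64", "cilium-715748"); err != nil {
		fmt.Println(err)
	}
}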

                                                
                                    